If your girl looks at you like this, you fucked up really bad and no amount of apology is gonna fix it. RIP.
You should also repost it here: https://lemmy.world/c/artshare
An epic fantasy about init systems? You can’t get nerdier than this. More!
Also, really nice art! I never cared about init systems in my life but I am now pro-systemd. You shouldn’t have made the villain so badass looking.
They’ve shown the COSMIC terminal working in their last showcase video.
RedoxOS will likely never become feature-complete enough to be a stable, useful and daily-drivable OS. It’s currently a hobbyist OS that is mainly used as a testbed for OS programming in Rust.
If the RedoxOS devs could port the COSMIC DE, it would become one of the best toy OSes and might get used on some serious projects. That could bring in enough funding to make it a viable OS for megacorps running security-critical infrastructure, which might eventually turn it into a truly daily-drivable OS.
As I said, I briefly used GNOME in the distant past and just remember being weirded out by design choices that felt very “Apple-like”. So them pulling an “Apple” and going “we know better than the user” doesn’t feel out of place.
I was gonna say that it was a third-party extension, but then I figured GNOME users would infer that pretty easily.
Yeah, Cosmic looks really nice. Their app store interface needs a bit of modernization work, but otherwise, it looks well polished.
Two other responses I got confirmed that this happens, and you’re saying otherwise. Doesn’t GNOME break third-party extensions that provide users basic functionality that should be in GNOME in the first place, but that the devs don’t want to implement? Is the meme wrong?
GNOME devs: we broke the toilet extension. Your Pokémon have nowhere to shit and piss.
Pokémon trainers: why the fuck is the toilet an extension? Shouldn’t it be part of the DE?
GNOME devs: we believe the toilet feature is unnecessary, so it hasn’t been and never will be implemented.
Note: I’ve barely used GNOME in my life, so this is based on memes I’ve seen about it.
If you guys like hiking and stuff, there’s this cool open source app called Trail Sense on F-Droid and it’s just so feature-packed…
I don’t hike, so I only use it for its pedometer and for the hypothetical situation where “I might get really lost”, but the number of features it has for hiking and survival is crazy, so I think it deserves to be better known.
Photopea. It’s Photoshop, but in your browser.
I had a hunch that writing the actual upload/download speed rather than Mbps was probably wrong. My bad, my internet provider lingo is rusty.
I don’t have a Jellyfin server, but 1 MB/s (8 Mbps) for each person watching 1080p (about 3.6 GB per hour of content per stream) seems reasonable. ~3 MB/s (24 Mbps) upload, and as much download, should work.
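If you want to sanity-check the math, here’s a quick sketch (the 8 Mbps per 1080p stream is just my estimate above, not an official Jellyfin number):

```python
# Back-of-the-envelope bandwidth math for the estimate above.
MBPS_PER_STREAM = 8        # assumed bitrate for one 1080p viewer (megabits per second)
SECONDS_PER_HOUR = 3600

viewers = 3
upload_needed_mbps = viewers * MBPS_PER_STREAM                      # 24 Mbps total upload
gb_per_stream_hour = MBPS_PER_STREAM * SECONDS_PER_HOUR / 8 / 1000  # megabits -> GB: ~3.6 GB

print(f"Upload needed for {viewers} viewers: {upload_needed_mbps} Mbps")
print(f"Data per stream per hour: {gb_per_stream_hour:.1f} GB")
```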
No. Quantization makes it go faster. Not blazing fast, but decent.
Completely forgot to tell you to only use quantized models. Your PC can run 4-bit quantized versions of the models I mentioned. That’s the key to running LLMs on consumer-level hardware. You can later read further about the different quantizations and toy with other ones like Q5_K_M and such.
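If you want a rough idea of why the 4-bit versions fit, here’s a quick sketch (the bits-per-weight numbers are approximations; real GGUF files have some overhead, and you still need room for the context/KV cache):

```python
# Approximate size of just the weights of an 8B-parameter model at different precisions.
params = 8e9  # ~8 billion parameters (e.g. Llama 3 8B)

for name, bits_per_weight in [("fp16", 16), ("Q8_0", 8.5), ("Q5_K_M", 5.7), ("Q4_K_M", 4.8)]:
    gb = params * bits_per_weight / 8 / 1e9   # bits -> bytes -> GB
    print(f"{name:7s} ~{gb:.1f} GB")

# fp16 lands around 16 GB, which won't fit on most consumer machines;
# the ~5 GB 4-bit version will.
```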
Just read that Phi-3 got released, and apparently it’s a 4B model that reaches GPT-3.5 level. Follow the news and wait for it to be added to ollama/llama.cpp.
Thank you so much for taking the time to help me with that! I’m very new to the whole LLM thing, and sorta figuring it out as I go.
I became fascinated with LLMs after the first AI boom, but all this knowledge is basically useless where I live, so I might as well make it useful by teaching people what I know.
The key is quantized models. A full-precision model wouldn’t fit, but a 4-bit 8B Llama 3 would.
Yeah, it’s not a potato, but it’s not that powerful either. Nonetheless, it should run 7B/8B/9B and maybe 13B models easily.
> running them in Python with Huggingface’s Transformers library (from local models
That’s your problem right here. Python is great for making LLMs but is horrible at running them. With a computer as weak as yours, every bit of performance counts.
Just try ollama or llama.cpp. Their GitHub repos are also a goldmine for other projects you could try.
llama.cpp can partially run the model on the GPU for way faster inference.
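For example, with the llama-cpp-python bindings (one way to drive llama.cpp from Python), partial offload looks roughly like this; the model path and layer count are placeholders you’d tune for your GPU:

```python
# pip install llama-cpp-python (built with GPU support, e.g. CUDA)
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_gpu_layers=20,  # how many layers to offload to the GPU; 0 = CPU only, -1 = all
    n_ctx=2048,       # context window size
)

out = llm("Q: What does an init system do? A:", max_tokens=128)
print(out["choices"][0]["text"])
```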
Piper is a pretty decent, very lightweight TTS engine that can run directly on your CPU if you want to add TTS capabilities to your setup.
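A minimal way to wire that up, assuming you’ve got the piper binary and a downloaded voice model (the voice filename here is just an example):

```python
# Pipe text into the piper CLI and get a WAV file back; runs entirely on the CPU.
import subprocess

text = "Inference finished."
subprocess.run(
    ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", "out.wav"],
    input=text.encode(),
    check=True,
)
```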
Good luck and happy tinkering!
Stop bragging 😤