Hacker News | nateb2022's comments

I've tried their "shimmer" site https://shimmer.poolside.ai (seems to be in the same vein of products as AI Studio / Repl.it).

Either the harness or the models are very bad in it; I'd say they feel less capable than Gemma4-E2B in virtually any harness. The larger model would plan out some steps and never actually perform them, even when prompted several times. The smaller model actually got more done. My guess is it's the harness, since you've had a good experience. Haven't tried the pool cli yet.


There's no GGUF available, but the process shouldn't be too hard from the provided .ckpt PyTorch checkpoint.
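A rough sketch of that conversion using llama.cpp's tooling, assuming the .ckpt weights can first be re-saved in a Hugging Face-style model directory (the directory name, output filenames, and quant type below are illustrative, not from the actual repo):

```shell
# 1. (Assumed) export the .ckpt weights to a Hugging Face-style directory
#    (safetensors/bin shards plus config.json and tokenizer files);
#    the exact steps depend on the model's training code.

# 2. Convert the HF directory to a GGUF file with llama.cpp's converter
python convert_hf_to_gguf.py ./talkie-hf --outfile talkie-f16.gguf --outtype f16

# 3. Optionally quantize for local inference
./llama-quantize talkie-f16.gguf talkie-q5_k_m.gguf Q5_K_M
```

This only works if llama.cpp already knows the model's architecture, which (per the comment below about the custom "talkie" architecture) may not be the case.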



Looks like it was deleted, here is another https://huggingface.co/DJLougen/talkie-1930-13b-GGUF/tree/ma...

After downloading, it looks like it uses a custom "talkie" model architecture that is not supported by llama.cpp, at least.

Edit: This one has now been deleted as well for some reason...


Update: here is a patched llama.cpp and quantized model for desktop use: https://github.com/solwyc/talkie-1930-13b-it-q5

(2025)

Low-quality AI-generated Reddit post

$AAPL down almost 1% after-market on this news

Which news? This one, or the daily Middle East blunder?

that's a rounding error

Looks like it briefly dipped, but stayed above where it started the day.

That is not terribly significant at all.

"it's priced in" - lol

"The market sees all, knows all and will be there from the beginning of time until the end of the universe (the market has already priced in the heat death of the universe)."

And I would add that the main criticism:

> LLMs and LLM providers are massive black boxes... No trust that they won't nerf the tool/model behind the feature... No trust they won't sunset the feature (the graveyard of LLM-features is vast and growing quickly while they throw stuff at the wall to see what sticks)

Doesn't really apply to the article's point about Claude Code Routines in particular. Should this feature disappear, it would be trivially easy to set up a similar pipeline locally, using a cron job to run opencode configured with a local LLM. I have no qualms about using a convenient feature I could reimplement myself; it saves me time.
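A minimal sketch of that local fallback as a crontab entry; the opencode subcommand, flags, model name, and prompt here are assumptions for illustration, not verified against the actual CLI:

```shell
# Run the routine every morning at 07:00; flag and model names are illustrative
0 7 * * * opencode run --model ollama/some-local-model "summarize yesterday's commits" >> ~/routine.log 2>&1
```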


LM Studio shipped this update. Under Settings, make sure you update your runtimes.


Thank you both!!


I'd recommend using the instruction-tuned variants; the pelicans would probably look a lot better.


> Very impressed by unsloth's team releasing the GGUF so quickly, if that's like the qwen 3.5, I'll wait a few more days in case they make a major update.

Same here. I can't wait until mlx-community releases MLX-optimized versions of these models as well, but I'm happily running the GGUFs in the meantime!

Edit: And looks like some of them are up!


Absolute n00b here, very confused by the many variations; it looks like the Mac-optimized MLX versions aren't available in Ollama yet (I mostly use Claude Code with this).


