Either LFM2.5-1.6B-4bit or Qwen3.5-2B-8bit or Qwen3.5-4B-4bit
Though, I don't see any references to Gemma at all in the open source code...
I would really like to know what people use these small and tiny models for. If any high-karma users are reading this, would you consider posting an Ask HN?
Very limited number of use cases, perhaps. As a generalized chat assistant? I'm not sure you'd be able to get anything of value out of them, but I'm happy to be proven otherwise. I have all of those locally already, without fine-tuning; what use case could I try right now where any of those are "very effective"?
You can use a small coding model to produce working code with a deterministic workflow (ex: state machine) if you carefully prune the context and filter down what it can do per iteration. Instead of letting it "reason" through an ever growing history, you give it distinct piecemeal steps with tailored context.
I think this can be generalized to:
Anything that can be built from small, well understood pieces and can be validated and fixed step by step. Then the challenge becomes designing these workflows and automating them.
(I'm not there yet, but one thing I have in mind is a hybrid approach where the planning is produced by a more expensive model. The outputs it has to produce are data-driven state machines or behavior trees (so they can be validated deterministically). It then offloads the grunt work to a small, local model. When that's done, the work gets checked, etc.)
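To make that concrete, here's a minimal sketch of what such a piecemeal, validated workflow could look like. The `Step` structure, the `small_llm` callable, and the validation hooks are all hypothetical placeholders, not anything Ensu or Claude actually ships:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    prompt: str                       # tailored, minimal context for this step only
    validate: Callable[[str], bool]   # deterministic check (compiles? parses? passes a test?)
    max_retries: int = 2

def run_workflow(steps: list[Step], small_llm: Callable[[str], str]) -> dict[str, str]:
    """Drive a small local model through distinct steps instead of one long chat."""
    results: dict[str, str] = {}
    for step in steps:
        for _attempt in range(step.max_retries + 1):
            output = small_llm(step.prompt)   # no growing history is carried along
            if step.validate(output):
                results[step.name] = output
                break
        else:
            raise RuntimeError(f"step {step.name!r} failed validation")
    return results
```

The point is that each call sees only its own tailored prompt, and a deterministic check gates progress instead of letting the model reason over an ever-growing history.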
Claude Code is a Desktop app as well.
For the user it's just important that the small gremlin that sits in the Ente app is not as smart as the gremlin that sits in the Claude app.
Helpful writeup here: https://pvieito.com/2026/01/inside-claude-cowork (I am not the author)
> Use Claude Code where you work
> Desktop, Terminal, IDE, Web and iOS, Slack
Not that it is important anyway ¯\_(ツ)_/¯
I installed it and it's none of that. It is a mere wrapper around small local LLM models. And, it's not even multi-modal! Anyone could've one-shotted this in Claude in an hour (I'm not exaggerating).
What's the target audience here? Your average person doesn't care about the privacy value proposition (at least not enough to accept a severe sacrifice in chat model quality). And users who do want that control can already install LMStudio/Llama.cpp (which is dead simple to set up).
The actual release product should've been what's described in the "What's next" section.
> Instead of general chat, we shape Ensu to have a more specialized interface, say like a single, never-ending note you keep writing on, while the LLM offers suggestions, critiques, reminders, context, alternatives, viewpoints, quotes. A second brain, if you will.
> A more utilitarian take, say like an Android Launcher, where the LLM is an implementation detail behind an existing interaction that people are already used to.
> Your agent, running on your phone. No setup, no management, no manual backups. An LLM that grows with you, remembers you, your choices, manages your tasks, and has long-term memory and personality.
I think they did. If you start the download and then open the sidebar and/or background the app, the download progress bar disappears and is replaced by the download button. If you press the download button again, the progress bar reappears at the correct point.
I find that Claude often makes little statefulness mistakes like that. Human developers do too, but the slower and more iterative nature of human development makes it more likely that they'd get caught.
This probably could have been one-shotted with Sonnet, not even Opus. Given how over-indexed they are on LLM coding, Haiku might even be able to do it.
This is actually an interesting coding model benchmark task now that I think about it.
If it's so great, why is there so little viscera documenting its greatness? Just lots and lots of words.
Coding is definitely not over IMO. Not yet anyway
There is truly nothing original here and the product doesn't have a chance in hell of earning money. Local LLMs on-device will be dominated by the device vendors, whose control of the hardware stack combined with their ability to subsidize billions of dollars of machine learning research gives them an unfair advantage. Apple knows what the next generation of silicon will deliver, and their ML engineers are already hard at work building models that will be highly optimized for that silicon a year or two ahead of time. Open source models are really great and are backed by well funded labs; however, delivering these models on-device in a way that pleases users will never be easier than it is for the vendors of the devices.
Plus, device vendors have ways of making money from local LLMs that third-party app providers do not. They can make their local LLM free and earn money on the hardware play, without skipping a beat on the billions of dollars of ongoing R&D. I don't see how third party app vendors make money here when they will be competing with the decent, totally free alternative that Apple and Google (and Samsung etc.) will load on in the next year or two.
Same with Kagi. That's where Kagi News was born.
I quite like the ethos, but this Ensu definitely seems underbaked.
But where are they! https://ente.com/about
Small team, rooting for them
But sure, making money with a standalone "local first is our headline feature" product will be incredibly hard against those, no doubt about that. In light of the limited quality of what local models can achieve, the privacy bonus just won't compel many to pay. But that only means that this "morning with Claude" you are suggesting might be just the right amount of investment for the result you'd realistically expect. And is that so bad? I'd argue the reverse: bundling up the low-hanging fruit, not by some hobbyist who will lose interest two weeks on, but by a company big enough to keep it going while small enough not to be a VC furnace that will inevitably turn on users once the runway runs out (*), is an opportunity to fill a niche few others can. Valuable for users who don't want to roll their own deployment of open source models (or can't, or are unwilling to commit to keeping them up to date, assuming that Ente does keep that ball rolling), and also valuable for the company if the investment really is so low that it pays for itself by raising awareness of their other products that apparently do earn them money.
(*) I was googling around a little, wondering if they actually are as close to bootstrapped as they seem on the surface, and yes, that's supposedly the core idea [0]. Despite that, they also took 100 kUSD in "non-diluting" funding (basically a gift, then?) from Mozilla with the explicit goal "to promote independent AI and machine learning" [1]. So this is not a CEO whim but a follow-up on a promise made earlier. If they actually did avoid spending all that money on a one-off and instead went smaller, planning to keep it current over a longer time horizon, I'd congratulate them on an excellent choice.
[0] https://ente.com/blog/5-years-of-ente/
[1] https://ente.io/blog/mozilla-builders/
The HN discussion for [1] seems to completely miss the point: that Mozilla program isn't about funding an image host (yeah, I'd also prefer if Mozilla focused on the browser and perhaps Thunderbird, but the foundation is what it is): https://news.ycombinator.com/item?id=41681666
We have not seen a tidal wave of non-technical people vibe coding up their own software solutions.
When my little brother, who is a drummer and has never even looked at "code" before, had Claude one-shot a Python app that let him download songs from YouTube, extract the stems, collect tempo/key/etc. information, and feed that into his DAW, all without ever looking at the code or knowing what any of it did, I knew that we were about to see a LOT of single-use applications.
I'm not against it, honestly. I have always written little one-off scripts and apps that accomplished something faster than doing it manually; now that those one-shots are sometimes possible with an LLM in seconds, it makes all my personal scripting so much easier... that said, I definitely read the scripts that come out and attempt to step through them in a debugger before assuming it's all good.
That to me is more valuable than code vibe coded by Claude in one afternoon.
I do agree that more local LLM options are always better.
And I don’t think they are trying to hide the strategy of having paid tiers; that’s what they did for their other product.
(Though I think this announcement is sufficiently unpleasant I'm starting to reconsider)
Have a comparison chart to Ollama, LMStudio, LocalAI, Exo, Jan.AI, GPT4ALL, PocketPal, etc.
Ideally if you "participate" in the network, you would get "credits" to use it proportionally to how much GPU power you have provided to the network. Or if you can't, then buy credits (payment would be distributed as credits to other participants).
That way we could build huge LLMs that are really open and not owned by anyone.
I would LOVE to participate in building that as well.
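A toy sketch of the kind of accounting such a network might need. The rate, node IDs, and credit units below are made up for illustration; a real system would also have to verify that the contributed work was actually done:

```python
from collections import defaultdict

class CreditLedger:
    """Toy accounting: earn credits by serving GPU time, spend them to use the network."""

    def __init__(self, credits_per_gpu_second: float = 1.0):
        self.rate = credits_per_gpu_second
        self.balance: dict[str, float] = defaultdict(float)

    def record_contribution(self, node_id: str, gpu_seconds: float) -> None:
        # Credited proportionally to the GPU time a participant has provided.
        self.balance[node_id] += gpu_seconds * self.rate

    def buy_credits(self, node_id: str, credits: float) -> None:
        # Purchased credits; in the proposed scheme the payment would flow to contributors.
        self.balance[node_id] += credits

    def spend(self, node_id: str, gpu_seconds: float) -> bool:
        cost = gpu_seconds * self.rate
        if self.balance[node_id] < cost:
            return False   # not enough credits: contribute more or buy some
        self.balance[node_id] -= cost
        return True
```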
This was posted the other day, but only briefly made the front page - seems kinda like what you’re talking about
Although the ability to use large models "for free" sounds pretty rad.
But the alternative is "not having Email", because Email can be used by the same bad actors.
Going to give this a try...
Does this seem sound?
Here’s where it was added to PrivacyGuides - https://github.com/privacyguides/privacyguides.org/issues/36.... The person opening the issue is the CEO of ente. So the CEO of ente gets his company mentioned in PrivacyGuides back when it was new and that makes it more legit?
The discussion is not all that relevant as PrivacyGuides does not rely solely on community input. The core team pretty much generates content and lists recommendations based on (what they claim is) their own research (which isn't saying much).
The forum and community really give us a lot of external insights, with the voting system letting us poll how popular something is.
While we put a very heavy importance on the community consensus, it is mostly up to the team to decide what comes and goes, where more heavy decisions require more votes...
A reason why it has never really been written out is that policies can be gamed, and the team really wants to be able to veto decisions...
As far as "evaluating"/reviewing tools the methods to do so are not documented...
https://discuss.privacyguides.net/t/32774

this seems self-contradictory
https://en.wikipedia.org/wiki/Comparison_of_OTP_applications
They just store tokens; without the other factor, at "worst" you get locked out of your account, but nobody else has access either. You're also supposed to, as good practice, not be limited to token generation and typically keep a dozen or so recovery codes. Also, if they were somehow not working at the one task they should do, namely generating tokens, then you wouldn't be able to use them, so the entry wouldn't even get added.
So... I might be missing something, can you please explain what worries you and why I should thus worry too?
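For context on what "generate tokens" means here: a TOTP app only keeps the shared secret and derives codes locally, roughly like this minimal RFC 6238-style sketch (illustrative only, not Ente Auth's actual code; the demo secret is a made-up example):

```python
import base64, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Derive the current code from a stored base32 secret (RFC 6238 / RFC 4226)."""
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int(time.time()) // period          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret for illustration
```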
Valid concerns. In the case of Ente Auth though, it is used by folks working at CERN [0], who also sponsored a recent security audit: https://ente.com/blog/cern-audit/
[0] https://cern.service-now.com/service-portal?id=kb_article&n=... / https://auth.docs.cern.ch/trouble-shooting/2fa-tips/
What I'm missing is a way to create and use Passkeys across devices. My use case does not support creating a new Passkey on every device, I need to sync them via servers I control. The system that supports that will be the system that I migrate to.
Expressly harvesting creds through a 2FA app seems a little more direct.
When the comments here say "there's no value because anyone could've compiled llama.cpp", you can see how detached from reality these people are.
Even jumping through the hoops to get an app on Play Store and Apple Store — an app that I can tell my friends to look up and download — is worth a lot.
An app that is also available on Mac and PC, mind you.
I'm an ex-Google/Meta/Microsoft/Roblox software engineer, and I couldn't be bothered to do any of that.
Neither could the rest of HN. But I'm not the one complaining about lack of novelty or value in this proposition.
However, it’s a bit confusing because, for example, a larger LLM model was downloaded to my smartphone than to my computer. It would probably make the most sense if the app simply categorized devices into five different tiers and then, depending on which performance tier a device falls into, downloaded the appropriate model and simply informed the user of the performance tier. Over time, it would then be possible to periodically replace the LLM for each tier with better ones, or to redefine the device performance tiers based on hardware advancements.
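A rough sketch of that tiering idea. The RAM thresholds, tier names, and model sizes below are hypothetical (not Ensu's actual table), and psutil is just one way to read installed memory:

```python
import psutil  # third-party; used here only to read total installed RAM

# Hypothetical tiers: (minimum RAM in GB, model to download).
TIERS = [
    (16, "tier 5: ~8B @ 4-bit"),
    (12, "tier 4: ~4B @ 8-bit"),
    (8,  "tier 3: ~4B @ 4-bit"),
    (6,  "tier 2: ~2B @ 8-bit"),
    (0,  "tier 1: ~1.6B @ 4-bit"),
]

def pick_tier() -> str:
    ram_gb = psutil.virtual_memory().total / 1e9
    for min_ram_gb, model in TIERS:
        if ram_gb >= min_ram_gb:
            return model   # tell the user which performance tier they landed in
    return TIERS[-1][1]

print(pick_tier())
```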
A little bit of cleanup on their site to break out "Ente, our original photo sharing app" from the rest of their apps would do wonders. I had to search around the announcement to find the download for this app, which feels about like trying to find the popular Ente Auth app on their website.
EDIT: and there are long-standing bugs like this one, unaddressed: https://github.com/ente-io/ente/issues/3087
https://github.com/ente-io/ente/blob/f254af939ff6950b63edf5f... Here is the system prompt, kinda embarrassing
Helping non-technical people get off of ChatGPT.com and using increasingly better local models seems worth celebrating and continued iteration.
It's par for the course, yet I'm still surprised.
My favorite so far is a commenter proudly touting their own run-LLM-locally app to prove that they "did it already".
An app which is desktop only.
I am at a loss when it comes to arguing with people like that.
This does the same for language models.
The people who got tired of waiting for "any half capable engineer" to do so.
>A desktop app for running Large Language Models locally
>desktop app
Make it an Android and iOS app, get it published in the respective stores, add cross-device sync, and then you'll have a comparable proposition.
That is, when I will be able to tell my friends to look up that Gerbil app and use that on their phone instead of ChatGPT when there's no service.
Can you post them here?
They also have a TOTP auth app?
If their photos app stopped crashing and they pursued basic feature parity between their iOS and desktop apps (IMO table stakes for a photo sync service) I'd have no issue recommending them. Instead, it seems like every so often they just branch off into a new direction, leaving the existing products unfinished. It's like Mozilla-level lack of focus.
We'd like to fix the crashes.
Sorry for the troubles.
https://github.com/Arthur-Ficial/apfel
Apple AI on the command line
I'd love to know a few more local LLM apps that are available on Android and iOS and Mac/PC under the same branding that I can point my non-technical friends to as a ChatGPT alternative that works offline (but still has sync across the devices).
Could you recommend a few?
> Ente is becoming like Proton: too many products and a lack of focus, leading to lower quality and not delivering what customers want
https://github.com/ente-io/ente/discussions/552#discussionco...
If you have any follow up questions, please do ask.
> I think it's important to offer a complete package
I agree. But doesn't that start with making one product (Ente Photos) good enough for people to actually be able to migrate? If Ente Photo really is your "home ground", as you say, shouldn't you prioritize accordingly? Specifically, in your response[0] to the GitHub issue I linked you say
> to pull it off with the finesse we would like, it will take us more than a quarter. […] I will mark this feature as unplanned until we have more engineering bandwidth.
But it seems you really do have the engineering bandwidth! You've just been prioritizing other products besides Ente Photos. I do understand that folder nesting in particular is a non-trivial change and if you look at the discussion on GitHub, I've actually been defending you rather heavily. But it's becoming increasingly difficult to do so – quite frankly, I am starting to doubt your product management is entirely on the right track if in 3 years you can't dedicate 3 months to the feature request with the highest number of votes by far. More generally, I am losing trust that any of your products will see enough polishing any time soon. (So I will definitely not put time into migrating anything else if I can't even migrate my photos yet.) And it seems I'm not the only one thinking that – your reputation among enthusiasts is starting to take a hit.
> There is no way Ente is going to let anyone else build a better photo app.
Words to live by! :-)
[0]: https://github.com/ente-io/ente/discussions/552#discussionco...
For a bootstrapped, engineering-driven company like Ente, product offers the best leverage for growth. We are not P0-ing nested folders right now because we believe there are areas within the photos app that, if invested in, will provide higher revenue returns, which we can re-invest into increasing our engineering bandwidth.
Now I understand the disappointment around us not prioritizing a feature that is blocking you from even using the product. It is a loss for all parties, but it is important for us to plan long-term. And while we're not prioritizing this specific feature, I don't think it is fair to say that we do not invest time into polish. We do care about our craft[0][1][2].
In case it helps, here's our product lead addressing the timelines for nested albums, and our perspectives around organization: https://www.youtube.com/watch?v=3lkTspvi_mM&t=367s
[0]: https://ente.com/blog/offline-gallery-faster-ml-family-feed-...
[1]: https://ente.com/blog/likes-comments-admin-settings-ente-pho...
[2]: https://ente.com/blog/introducing-rituals-public-links-ente-...
So how would nested tags interact with albums? Will I also be able to share (or contribute to) tags like I can share (or contribute to) albums? Actually, aren't albums already "tag-like" in the sense that photos can be assigned to multiple albums at the same time?
> And while we're not prioritizing this specific feature, I don't think it is fair to say that we do not invest time into polish. We do care about our craft[0][1][2].
This was overly harsh, my apologies!
I've found https://github.com/alichherawalla/off-grid-mobile-ai but haven't tried anything in this space yet.
Absolutely no one called them crazy.
Hundreds of local LLM apps exist.
The "What's next" section presents these ideas as novel, but all of them have already been achieved or created in some form: you can run a local LLM on a phone and connect it to an agent.
Then I moved to PocketPal for local LLMs.
How does it compare to Jan AI for example? or LM Studio? or ????
I have a phone in a drawer that I could install Termux and Ollama on and reach over Tailscale, and then I'd have an always-on LLM for super light tasks.
I do really long for a private chatbot, but I simply don't have access to the hardware required. Sadly, I think it's going to be years before we get there...
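For the drawer-phone idea above, a minimal sketch of calling an Ollama instance over Tailscale from another machine. The Tailscale IP and model tag are placeholders, and it assumes Ollama was started with OLLAMA_HOST=0.0.0.0 so it's reachable over the tailnet (by default it listens on port 11434):

```python
import json
import urllib.request

# Placeholders: your phone's Tailscale IP and whichever small model you pulled.
OLLAMA_URL = "http://100.64.0.5:11434/api/generate"

def ask(prompt: str, model: str = "qwen2.5:1.5b") -> str:
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask("Summarize this note in one sentence: ..."))
```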
If Ente is reading this: please add the requirements to run it (how much RAM, etc.).
Come onnnnnn. I would rather read a one line "Check out our offline llm" rather than a whole press release of slop.
This looks very neat. I'm not familiar with the nitty gritty of AI so I really don't understand how it can reply so quickly running on an iPhone 16. But I'm not even going to bother searching for details because I don't want to read slop.
So you look down and you see a tortoise. It's crawling towards you.
It requires a Firefox add-on to act as a bridge: https://addons.mozilla.org/en-US/firefox/addon/ai-s-that-hel...
There is honestly not much to test just yet, but feel free to check it out here, provide feedback on the idea: https://codeberg.org/Helpalot/ais-that-helpalot
The essentials work; I was able to have it make a simple summary of CMS content. So next is making it do something useful, and making it clear how other plugins could use it.
Also: "Your AI agent can now create, edit, and manage content on WordPress.com" https://wordpress.com/blog/2026/03/20/ai-agent-manage-conten...
I'm talking about connecting Ollama to your WordPress.
Not via MCP or something that's complicated for a relatively normal user. But thanks for the link.
If the new Wordpress feature would allow for connecting to Ollama, then there is no need anymore for my plugin. But I don't see that in the current documentation.
So for now, I see my solution being superior for anyone who doesn't have a paid subscription but has a decent laptop and would like to use an LLM "for free" (apart from power usage) with 100% privacy on their website.
For when WordPress doesn't have enough exploits and bugs as it is. Also, why bother with WordPress in the first place if you're already having an LLM spit out content for you?
You can check the code for exploits yourself. And other than that it's just your LLM talking to your own website.
> Also why bother with wordpress in the first place
Weird question, but sure: I use WordPress because I have a website that I want to run with a simple CMS that can also run my custom WordPress plugins.