Hacker News | LeCompteSftware's comments

"Enforce" yes but the point is that this fork clearly violates broader principles and conventions around respecting clearly active trademarks. Nobody is demanding a lawsuit in French court or any particular legal consequences. But it is totally valid and reasonable for an international company like Cloudflare to crack down on hosting his website: they have French customers.

Also it's really not a finders-keepers thing with trademarks and international borders. If someone trademarked Notepad++ in the US and released some janky port with the Notepad++ name, Don Ho could likely still win in US court. Most reasonably knowledgeable US consumers who are plausibly in the market for a Windows text editor are at least superficially familiar with "Notepad++" as the name of a well-regarded software product. I know we travel in certain circles, but there is a reason this guy wants to use "Notepad++" and not "MacnotePlus - A fork of Notepad++ for MacOS." It's a famous name.


To be clear, in the GitHub thread Don Ho repeatedly encouraged him to do this, and said it was cool that he was trying to bring Notepad++ to Mac! Just don't make it look like Don Ho and the rest of the team are responsible for any quality issues. Don't use the logo!

"Objective-Notepad" was right there.


> "Objective-Notepad" was right there.

It still is. There's only a handful of hits on Google for that, too.

You should do it. I'd do it if I had a Mac and used Notepad++ ;-)


Objective-C is a nice language; it's a shame it only really caught on because Apple bought NeXT.

The smarmy dishonesty about "expanding the Notepad++ brand" actually is selfish and ill-intentioned. Perhaps he is too young and naive to fully understand that he is being parasitic. But naivety is a well-travelled path towards malice.

Regardless, he absolutely deserves to be shamed on GitHub for this. I don't like the online culture of public shame and sandbagging - I think this GitHub thread should be closed now that it's viral - but sometimes people actually do things they should be ashamed of. This needs to be a tough lesson.


I'm spamming this everywhere - taken from his blog:

> I've shipped fintech and risk products at Moody's, BNY, AxiomSL, Amex and many more. I've built platforms, designed user experiences, assembled portfolio analytics and worked on professional services teams.

Also, he's not young. Check his GitHub avatar.


You know, what's frustrating is that when I first contemptuously dismissed "Notepad++ for MacOS" as a trademark violation, I did skim that stuff and accordingly just sort of assumed the port was technically legitimate, but disrespectful of the trademark. But of course it was vibe-coded, and apparently chock-full of stupid bugs that would have been caught with adequate manual testing. Why did I assume otherwise?

This from his website is pretty funny:

  These days I'm deep in multi-agent AI and honestly it's changed everything. I build with both hands, one on the code, one on the vision. I can finally bring to life ideas I've been carrying around for years that always needed too many people and too many quarters.

The first well-known software he vibe-coded is a buggy port of something a talented human spent two decades hand-crafting. The slop project is completely devoid of creativity or imagination, and it's going down in public flames because he was stupid about the trademark. Kind of cartoonish, actually.

It sounds like BS. Guy’s done it all if you believe his resume.

"I will give you one week to change the name."

"No, I'm not going to do that."

"Okay fine, I'll report you to Cloudflare now."

"BROOOOOOOO you said you'd give me a week?!?!"


It looks like it went more like this:

"Stop using my trademark." [1]

"OK, give me a couple of weeks. I was intending to expand your brand." [2]

"No. I've reported this to your CDN." [3]

---

[1]: This is the correct way to handle things.

[2]: This has the appearance of being evidence of -deliberate- fuckery.

[3]: This kind of action is the inevitable result of deliberate fuckery.


We have found the limits of agentic engineering. Changing a logo on a website apparently takes weeks.

Funny how the vibe-coding speed grinds to 0 the moment people catch on to their bullshit. A name change requires a week but shitting out 200 commits with Claude takes barely a month.

This comment really put it into perspective for me. I couldn't have phrased it better myself.

OP is being a bit tongue-in-cheek, I believe they mean that some vibe coders really want to be abstracted away from their own jobs, and are very much not interested in computer-scientific abstraction.


It is easy to overinterpret this based on the headline; the doctors were actually at a slight disadvantage. This isn't how they normally work, it's a little more like a med school pop quiz:

  An AI and a pair of human doctors were each given the same standard electronic health record to read – typically including vital sign data, demographic information and a few sentences from a nurse about why the patient was there. The AI identified the exact or very close diagnosis in 67% of cases, beating the human doctors, who were right only 50%-55% of the time.... The study only tested humans against AIs looking at patient data that can be communicated via text. The AI’s reading of signals, such as the patient’s level of distress and their visual appearance, were not tested. That means the AI was performing more like a clinician producing a second opinion based on paperwork.
"I don't know, let's run more tests" is also a very important ability of doctors that was apparently not tested here. In addition to all the normal methodological problems with overinterpreting results in AI/LLMs/ML/etc. Sadly I do think part of the problem here is cynical (even maniacal) careerist doctors who really shouldn't be working at hospitals. This means that even though I am generally quite anti-LLM, and really don't like the idea of patients interacting with them directly, I am a little optimistic about these being sanity/laziness checkers for health professionals.

Also, this is not how ER doctors work? They are not trained for this, nor does it reflect their day-to-day performance. If they worked like this, perhaps they would know a bit more about the nurse writing down those notes, and the kinds of things that particular nurse is likely to miss or overemphasize - just as an example.

The article gives a neat example: In one case in the Harvard study, a patient presented with a blood clot to the lungs and worsening symptoms. Human doctors thought the anti-coagulants were failing, but the AI noticed something the humans did not: the patient’s history of lupus meant this might be causing the inflammation of the lungs. The AI was proved correct.

Which is nice and all, but in the presence of a blood clot, I can understand that treating inflammation instead is not the first thing on a doctor's mind, what with blood clots being potentially life-threatening and all. It raises the question: was this a real-life case, and what happened to that patient? Since this is a case for which the correct diagnosis is known, it was eventually correctly diagnosed - presumably then the patient did not die of a blood clot, nor of an uncontrollable fever.

Also, how representative is a patient with Lupus? According to House, MD, it's never Lupus.


The interactivity seems to be a substitute for coherent global organization (and a rationalization by the human author for not reviewing the LLM's strange stylistic / wording choices). It's incredibly distracting, and fatally weakens the argument! The entire time I was thinking "ok, then we just do the same old scrum on specs instead of code, seems like the human management advantages are still as relevant as ever." Perhaps if this were written as an essay the author could have collected their own thoughts a little better and addressed this point. They only did so, offhandedly, at the end.

UGH I tried to copy-paste that part ("Start writing specs in a way..."), but I couldn't because of this obnoxious fucking interactivity! Why write an essay if people can't share the parts they find interesting???? Again this stuff is designed to impede critical long-distance thought by throwing up a bunch of short-range distractions. It even works on the humans and LLMs writing it. This essay really is incoherent.

Also as someone who used to work manual QA for fairly sensitive finance/pharmaceutical software, this part is darkly funny (I actually could put it on my clipboard, god I hate JS):

  Then · circa 2001
  6 weeks
  Submit code. Wait for QA. Get a bug list. Negotiate. Fix. Wait again.

  Now · 2026
  6 seconds
  Type, save, hot-reload, AI suggests a fix, tests run in parallel. The build itself talks back to you.
"AI suggests a fix" but who finds the bug? AI, I suppose. Good luck with that. Meta is desperately training AI to use a mouse correctly by snooping on its tens of thousands of its own workers. A very important part of my job was clicking on stuff. Maybe in a few years of mass surveillance AI will figure it out. Until then, it seems like Mantyx needed to hire a human who knows how to use a computer and review this website. It's incredibly hard to use because it's obviously vibe-coded. HIRE HUMANS TO AT LEAST LOOK AT YOUR SOFTWARE. Even if it's just a website.

Speaking of, it's so depressing to see this written by an actual company trying to sell stuff:

  Treat agents as teammates
  Specify, dispatch, and verify. Stop treating AI as autocomplete; start treating it as a junior engineer who works at machine speed.

Junior engineers: do not work at Mantyx! They have announced their intentions to mistreat you. It's LLM-boosted corporate hideousness.

Hmm I think all these replies are overcomplicating things.

At a group level, some kids are slower at this Stop/Go task than others. The group difference appears to be this increased broad-scale brain activity: the slow group is overall more prone to distraction and daydreaming.

However, at an individual level, slowing down on the task means increasing your focus (and decreasing brain activity in irrelevant regions), regardless of whether you were in the slow group or the fast group. So the group-level difference is not necessarily as profound as it might appear, and applying "slow group" with too broad a brush means you're going to sweep up some kids who are naturally cautious and focused.


I obviously don't know what the codebase looks like, but

a) Haskell's reputation for terseness partially comes from its overrepresentation in academic / category-theoretic circles, where it's typically fine to say things like `St M -> C T`. But for real software it's a lot more useful to say things like `TransactionState Debit -> Verified Transaction` etc etc.

b) The other part of Haskell's terse reputation is cultural, something extending back to LISP: people being way too clever about saving lines with inscrutable tricks or macros. I imagine that stuff is discouraged at a finance company like Mercury in favor of clarity and readability: e.g. perhaps the linter makes you split monadic stuff into pedantic multiline do expressions even if you can do it in a one-liner with >> and >>=.
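To illustrate point (b) with a toy example (hypothetical names, not anything from Mercury's actual codebase): the same IO pipeline written tersely with >>= and >>, then as the explicit multiline do-block a readability-minded linter might require.

```haskell
module Main where

-- Terse style: chaining with >>= and >>, no intermediate names.
echoTwice :: IO ()
echoTwice = getLine >>= \s -> putStrLn s >> putStrLn s

-- The same logic as an explicit do-block, which a style guide at a
-- finance shop might prefer: each step on its own line, the bound
-- value given a name.
echoTwice' :: IO ()
echoTwice' = do
  s <- getLine
  putStrLn s
  putStrLn s

main :: IO ()
main = echoTwice
```

Both desugar to the same thing; the do-block just trades cleverness for something a reviewer can scan line by line.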


Consciousness is not definable because we don't know enough about it. That doesn't mean it can't be discussed; we didn't have a good definition of "number" until the 1800s. That didn't make arithmetic meaningless because people had an understanding of the concept. The lack of formal definition pointed to a gap in logic that took thousands of years to be filled. Likewise there is a gap in experimental neuroscience that will take many decades to be filled.

FWIW, as someone in the "first camp," my real claim is that many animals are meaningfully conscious, including all birds and mammals, and no claims of LLM consciousness even bother to reconcile with this. It is extremely frustrating that there are essentially two ideas of consciousness floating around:

- the scientifically interesting one: a vague collection of cognitive abilities and behaviors found in all vertebrates, especially refined in birds and mammals

- the sociologically interesting one: saying "cogito ergo sum" in a self-important tone

Claude has the second type in spades, no doubt. The first is totally absent. And I have a good dismissal of the second type of consciousness: it appears to be totally absent in all conscious animals except humans. So it is irrational and unscientific to take this behavior as a sign of consciousness in Claude, when Claude is missing all the other signs of consciousness that humans actually do have in common with other animals.

Sometimes I seriously wonder if people at Anthropic consider dogs to be conscious. Or even Neanderthals.

