Funnily enough, most of the young people I know fall somewhere between those two ends of the spectrum.
I know some actual luddite-tier AI haters who believe it's ontologically evil, and another student majoring in Data Science who went to the most recent career fair and told a recruiter "AI will replace you" (I, uh, don't think he's getting that internship).
And of course many, many others that fall between the two extremes.
The one thing we can all agree on is that it makes homework a hell of a lot easier :) (well, except the luddite types; they refuse to use it in any capacity)
I'm a member of a political action committee, where I was brought in as an expert on professional media applications of AI. I've got extensive experience using AI tools in the production of well-known entertainment properties (think VFX for film and animation). Anyway, within the political action committee there is a diverse mixture of people, with about a fifth of them under age 30. The entire under-30 set is so AI-negative, to such an irrational degree, that I have been asked to do nothing and offer no advice that incorporates any technology at all. They are so paranoid. In a discussion that wasn't even particularly emotional, a bunch of them erupted in tears; that's how irrational they are about it.
Are you able to share whether the PAC was Democratic Party or Republican Party aligned? When I first came to America, the headlines were about how Obama’s campaign embraced tech successfully. By now, tech is considered right-wing. If the young ‘uns who burst into tears were on a Republican aligned PAC that would be interesting. It would mean cross-political tech angst.
The biggest irony in telling a recruiter they'll be replaced is how much easier a data scientist is to replace with LLMs. And given LLMs' sycophantic nature, execs will eat up whatever "data" they make up, too.
Young people love AI when it helps them cheat on homework, or when it's used for roleplay and memes. Generating "content" with AI is generally more hated, especially art and video.
That is a bad counter-example, because it's just a poorly conceived statement. You apparently don't hate knives; you hate killing people, which isn't remotely similar.
Using AI to cheat at academics and then hating on people who use AI to cheat at media creation is absolutely hypocritical. It's exactly this kind of hypocritical stupidity that results in a single vendor's LLM being shoved into the browser.
Do they really? Hating on AI slop is a common sentiment on social media, but remember that the opinions you see on social media are often not representative of what the general population thinks at all.
I keep hearing stories about how homework is now useless because every student just gets ChatGPT to do it for them, and from personal experience, I'm inclined to believe them.
I don't believe every student uses a calculator to solve their math homework, so what makes ChatGPT unique here? For certain subjects the ability to cheat has been trivial for a long time, yet there was no crisis.
I asked it about designing a 12 V solar system for a garden shed and it got everything but the broadest of strokes wrong. It figured out there should be a solar panel, a solar charge controller, a battery and some loads, but the wiring was nonsensical, and when I drilled in on the solar charge controller settings etc. it completely fell apart. An absolute non-starter for any information you plan on depending on, but good entertainment value and impressive execution.
Same story here: I installed it and ran `gwc auth setup`, only to find I needed to install the `gcloud` CLI by hand. That led me to this link with install instructions: https://cloud.google.com/sdk/docs/install. Unmistakable Google DX strikes again.
In theory, comments on Hacker News should advance the discussion and meet a certain quality bar, lest they be downvoted to make room for ones that do. I'm not sure this was ever true in practice, and it certainly seems to have waned in the years I've been reading this forum (see the many pelican-on-a-bike comments on any AI model release thread), but I'd expect some people still vote with this in mind.
Being sarcastic doesn't lower the bar a comment has to meet to avoid downvotes. So before concluding that people missed the sarcasm, first consider whether the comment actually adds to the discussion.
TIL that HA notifications can have associated actions. I have the exact same setup as you, except I only receive the notification and then walk over to the laptop to unblock the agent, feeling like a human tool call. This will improve my workflow, thank you.
The notification payload for reference; you will also need a permission `input_select` (pending/allow/deny) and an automation that triggers on `mobile_app_notification_action`:
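Since the payload itself isn't shown here, this is a minimal sketch of the shape such a setup usually takes with the HA companion app; the entity IDs, action names, and titles are placeholders, not the commenter's actual config:

```yaml
# Hypothetical sketch: actionable notification via the HA mobile app.
# notify.mobile_app_my_phone and input_select.agent_permission are
# invented names; substitute your own entities.
service: notify.mobile_app_my_phone
data:
  title: "Agent needs approval"
  message: "The agent is waiting for permission to continue."
  data:
    actions:
      - action: "AGENT_ALLOW"   # emitted as mobile_app_notification_action
        title: "Allow"
      - action: "AGENT_DENY"
        title: "Deny"
```

And a matching automation that records the tap in the permission `input_select`:

```yaml
# Hypothetical sketch: react to the notification action tap.
automation:
  - alias: "Agent permission: allow"
    trigger:
      - platform: event
        event_type: mobile_app_notification_action
        event_data:
          action: "AGENT_ALLOW"
    action:
      - service: input_select.select_option
        target:
          entity_id: input_select.agent_permission
        data:
          option: "allow"
```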
Not the person you replied to, but I'll stress the point that it's not just about what you can add that Claude Code doesn't offer; it's also about not having to take what Claude Code does offer that you don't want.
I dislike many things about Claude Code, but I'll pick subagents as one example. Don't want to use them? Tough luck. (Caveat: it's been a while since I used CC; maybe it's configurable now, or always was and I never discovered it.)
With Pi, I just didn't install an extension for that (I suspect one exists), and I have the choice of never finding out.
IME CLAUDE.md rarely gets fully honored. I've left HN comments before about how I had to convert some CLAUDE.md instructions into deterministic pre-commit checks because of how often they were ignored. My guesstimate is that it's about 70% reliable. That's with Opus 4.5. I've since switched to GPT-5.2 and now GPT-5.3 Codex, and use Codex CLI, Pi and OpenCode rather than CC, so maybe things have changed with a new system prompt or with the introduction of Opus 4.6.
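For anyone curious what "converting a CLAUDE.md instruction into a deterministic check" can look like, here's a minimal sketch using the pre-commit framework; the specific rule (forbidding `console.log` in `src/`) and hook id are invented examples, not the commenter's actual checks:

```yaml
# Hypothetical .pre-commit-config.yaml fragment: a local hook that
# enforces a rule deterministically instead of hoping the agent
# honors a CLAUDE.md instruction.
repos:
  - repo: local
    hooks:
      - id: no-console-log          # invented example rule
        name: Forbid console.log in committed code
        entry: bash -c '! grep -rn "console\.log" src/'
        language: system
        pass_filenames: false
```

The point is that the hook fails the commit 100% of the time the rule is violated, whereas a prose instruction in CLAUDE.md only holds as often as the model chooses to follow it.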