Hacker News | new | past | comments | ask | show | jobs | submit | imiric's comments | login

Very nice!

Can you share the skill for it?


Why is everyone compelled to write one of these articles? Do they think that their workflow is so unique that they've unlocked the secret to harnessing the power of a pattern generator? Every single one of these reads like influencer vomit.

My workflow hasn't changed since 2022: 1. Send some data. 2. Review response. 3. Fix response until I'm satisfied. 4. Goto 1.


It's OK. I actually love looking at other people's work. I may never follow it exactly, but once in a while I find gotchas I can steal and adapt to my own. Let it be; let people express themselves. If not for the veterans with years of experience, then for the people coming in recently, who should find these things worth reading and learning from.

> Why is everyone compelled to write one of these articles?

LinkedIn clout.


Ed Zitron's latest piece has a great take on this: basically, yes, they think they've unlocked a great secret and that they are very smart, when instead they are actually doing the work for the LLM, while giving the LLM the credit for the outputs of their work.

Documenting what I do is fun and relaxing, and it's for me, so I write. The only time I had to share mine was with a friend who was getting into coding lately. https://www.nadeem.blog/writing/workflows

I think your take is overly negative. Regardless of what they think, sharing one's experiences with others is how we advance, both as individuals and as a community/mankind. Talking about AI workflows, I am personally interested in how the people who are happy working with AI actually work, so that I could also be happier with my work. If they write up their workflow, I can either learn from it and improve my own work; or learn that they are doing something completely different from what I do, which might explain the disparity between people's experiences with AI; or learn that they are spouting nonsense, reaffirming that it might really be mostly hype. Either way, each of these is net positive information for me.

> Do they think that their workflow is so unique that they've unlocked the secret to harnessing the power of a pattern generator?

Yes, just like everyone thought their .vimrc was amazing 20 years ago. It is vomit.


Posting your .vimrc was actually great. You could quickly scan it for interesting bits, then add those bits to your own config.

Now there’s nothing to pick from or compare. Just vibes, and my shamanic dance is twistier than yours.


Nobody writes about their work thinking the whole world will read it. They write it for their friends, maybe a small group of regular readers, also for themselves. I for one really like it, even if I get bored after reading 5 similar articles, because maybe someone will only ever read one of them, and it’ll help them improve their own work.

I mean, that argument doesn't hold water when you then post it on HN and Reddit.

> I liked YouTube Premium because it was an ethical way to avoid ads on YouTube

Why would you behave ethically towards a company that is anything but?

The slight remorse I feel by not using official YT frontends is towards creators I enjoy watching, who I try to support via other means, if possible. But then again, any creator or business who chooses advertising as their only business model doesn't deserve my support.

Advertising is a scourge on humanity. It corrupts every medium of information by allowing sleazy middlemen to psychologically manipulate one party not just into buying products out of manufactured desire, but into thinking and behaving in ways that serve someone's agenda. It is weaponized via platforms built by adtech companies, which have played a major role in the current sociopolitical instability in the world. It is so insidious that even though it has concentrated incredible amounts of wealth into the hands of a few, most people see it as harmless because they get products and services for "free". To hell with all of that.


What I was doing was using YouTube Premium registered via a VPN. I was paying the equivalent of 2 EUR in Indian rupees. And I did not feel bad about it, because the main users were my kids on our TV. Now my kid uses SmartTube on the TV, and YouTube ReVanced on a smartphone (with NewPipe as backup, since sometimes one or the other is broken). So they lost money.

I do believe the better solution is to go the DIY route and support each channel directly, but yeah. I got Amazon Prime, which gives me free shipping on Amazon. On top of that, I can support one Twitch channel for free, so I am going with Critical Role. They also sell their own platform, but it is more expensive than Bezos' deal. It is hard to compete with big tech...

I have a hunch my IPv4 address is shitlisted here and there, though it could also be Linux + Firefox + a plethora of extensions. I'll get a new IPv4 soon, so it's a good time to also clear all my cookies and part with some extensions.


> The slight remorse I feel by not using official YT frontends is towards creators I enjoy watching

That's what I felt bad about. I didn't care if I was depriving Google of money, but I was watching a lot of videos of relatively small channels, and I was watching them with ad block, and I wasn't compensating them otherwise. In a bit of fairness (though not much) I was not making much money at the time.

I agree that advertising is bad for humanity. I hate ads. I don't like the idea that a corporation is weaponizing my psychology to sell me crap I don't need. For the most part I would rather pay for things, but of course I make a lot more money now than I did back in 2015.

I've said it before, but I think it bears repeating: people will pay for things if those things don't suck. I think it speaks to the shittiness of the platforms that people will only use stuff like Facebook and YouTube if they're "free".


> And statistically-speaking, is viable as long as a company keeps its users to a normal distribution.

Doing a bait-and-switch on a percentage of your paying customers, no matter how small the percentage is, may be "viable" for the company, but it's a hostile experience for those users, and companies deserve to be called out for it.


On the other hand, subsidizing high-usage customers with low-usage customers is pretty generous to the high-usage customers, and there's no pricing model that doesn't suck a little.

Pricing tiers suck if your usage needs are at the bottom of a tier, or you need exactly one premium feature but not more. A la carte pricing is always at least a bit steep, since there's no minimum charge/bulk discount (consider a gym or museum's "day pass") so they have to charge you the full one-time costs every time in case that's your only time.

Base cost + extra per usage might be the best overall, but because nobody has solved microtransactions, the usage fees have to be pretty steep too. And frankly, everyone hates being metered: it means you have to think about pricing every time you go to use something.


> We believe that the user experience comes first.

If by "user" you mean advertisers, sure you do. Everyone else is an asset to extract as much value from as possible. You actively corrupt their experience.

The fact these companies control the web and its major platforms is one of the greatest tragedies of the modern era.


I've used ` as prefix for years now. Considering how often you switch windows/panes, I reckon using a single character has saved me hours per year. :D

It rarely conflicts with whatever I'm doing, but I have a binding to temporarily switch it to `C-a` and back, which I almost never use.
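For anyone wanting to try this, here is a minimal `~/.tmux.conf` sketch of the idea: backtick as the prefix, plus a pair of bindings to switch the prefix to `C-a` and back. The toggle keys (F11/F12) are my own arbitrary choices for illustration, not the commenter's actual bindings.

```
# use backtick as the prefix; press it twice to type a literal backtick
unbind C-b
set -g prefix '`'
bind '`' send-prefix

# temporarily switch the prefix to C-a and back (F11/F12 are arbitrary)
bind F11 set -g prefix C-a
bind F12 set -g prefix '`'
```

The `send-prefix` binding is what keeps a literal backtick typeable, which matters if you paste shell snippets with command substitution.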

Oh, and I've used this themepack[1] for years as well.

Actually, here's my config[2] if someone finds it useful. I can't claim ownership of it, and probably stole it from somewhere I don't remember anymore.

BTW, the author's site https://rootloops.sh/ is certainly... something. :)

[1]: https://github.com/jimeh/tmux-themepack

[2]: https://gist.github.com/imiric/9bd3e5b7fc5e1468d05abc674f42e...


Well said.

> We're a very primitive species, and the forces involved here are genuinely new.

It's absolutely wild to me that we went from inventing flying machines to putting people on the freaking moon in the span of a human lifetime. What we've accomplished with technology in the last 500 years, let alone in the last century, is nothing short of remarkable.

But, yes, in the grand scheme of things, we're still highly primitive. What's holding us back isn't our ingenuity, but our primitive instincts and propensity towards tribalism and violence. In many ways, we're not ready for the technology we invent, which should really concern us all. At the very least our leaders should have the insight to understand this, and guide humanity on a more conservative and safe path of interacting with technology. And yet we're not collectively smart enough to put those people in charge. Bonkers.


This is neat, but the tap to release controls are unintuitive for me. I much prefer the variant of this game that uses hold, drag and aim as input. This allows much greater control, is more engaging, and thus feels more rewarding and fun. Plus, there's no waiting period for the ball to circle back to where you want it to be.

Tangentially, this is also why I dislike the modern trend of auto-shooters and idlers. The twin-stick shooter is by far the superior control scheme for this type of game, yet for some reason people enjoy having less control and engagement. I never got the appeal.


> We have to get safety right, which is not just about aligning a model—we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future.

This might be the greatest example of cognitive dissonance I've seen in years. I can't understand how someone who's clearly highly intelligent can express this opinion, while doing the complete opposite. Does he think that everyone is a fool and that nobody will notice? Is this some form of gaslighting? Unbelievable.

Violence is not the answer, but it's easy to see how Sam's public persona would push someone to do this. There are certainly disturbed people who don't need any logical reason for violence, but maybe it would help if Sam stopped being so damn dishonest and manipulative. Even this post that is intended to gain sympathy ends up doing the opposite.

As a sidenote, I wish we would stop paying attention to these people. A probabilistic pattern generator is far from the greatest technology humanity has ever invented. Get off your high horse, stop deluding people, and start working with organizations and governments to educate people in understanding and using this tech, instead of hoarding power and wealth for you and your immediate circle of grifters.

> A lot of companies say they are going to change the world; we actually did.

Ugh.


I'm on the skeptic side of "AI" and find this entire industry obnoxious, but your argument doesn't hold any water.

Technology that can be used to kill innocent people is all around us. Would it be moral to attack knife manufacturers? Attacking one won't make the technology disappear. It has been invented, so we have to live with it.

Also, it's a stretch to say that "AI" "kills innocent people". In the hands of malicious people it can certainly do harm, but even in extreme cases, "AI" can currently only be used very indirectly to actually kill someone.

Technology itself is inert. What humans do with technology should be regulated.

IMO the fabricated concern around this tech is just part of the hype cycle. There's nothing inherently dangerous about a probabilistic pattern generator. We haven't actually invented artificial intelligence, despite how it's marketed. What we do need to focus on is educating people to better understand this tech and use it safely, on restricting access to it so that we can mitigate abuse and avoid flooding our communication channels with garbage, and on better detection and mitigation technology to flag and filter it when it is abused. Everything else is marketing hype and isn't worth paying attention to.


> Would it be moral to attack knife manufacturers?

Apply this to guns.

Then look at how this works in the US. You could, but then a law was made to protect gun manufacturers: the Protection of Lawful Commerce in Arms Act.

AI will get this treatment I’m sure.


> Would it be moral to attack knife manufacturers?

If they're selling the knives knowingly to a knife-murderer, it might be worth discussing.

Sam Altman is not, although he portrays himself that way, some geeky guy without power who just builds products; he's the guy who makes the decision to supply this tech directly to the US government, which is on the record about using it for military operations. And you're right on the last point: sure, the 20-year-old guy who threw a Molotov cocktail at Sam's house is, I'm going to assume for now given the topic Sam chose for the piece, an anti-tech guy.

But assume for a second you had your family wiped out in a bombing run because Pete Hegseth attempted to prompt himself to victory with the statistical lottery machine. If the CEO knew this and enabled it to add another zero to his bank account, I'm not so sure about the ethics of that one.


Sibling comment already said it, but yes, I was specifically alluding to Altman's decision to allow the US government to use their AI to choose bombing targets without a human in the loop - perhaps this is why the US government double-tapped[1] a school, killing 160 girls, all younger than 12, when the school was clearly marked on Google Maps.

I also vigorously dislike the industry, but your stance 'I'm on the skeptic side of "AI"' is something you need to address - saying this in the friendliest way possible, you are wrong.

AI needs to be opposed, because the billionaires are going to use it to turn the world into shit, but if the best the AI opposition can muster is "AI isn't useful", we are fucked. It's extremely powerful and can do bizarro things when you rig it up with tools - and no one is paying attention to the kinds of things we need to prevent companies like Google from doing with it.

[1] double-tapped: a phrase referring to the practice of firing a second missile after the first to kill any rescuers or surviving schoolgirls


Regardless, "AI" is not doing the killing in that case. Rather, humans have deployed it to control weapons that kill people. There are several layers of indirection there before you can claim "AI kills people". This is the same indirection as when a human chooses to press a button that fires a missile, or stab someone, just with more steps involved.

So you can also be outraged at weapon manufacturers, which is one step closer. Or, you can skip the indirection, and be outraged specifically at people in charge of using this technology, which is my point.

I'm disgusted by this industry as much as you are, believe me. But blaming the companies that produce "AI" for people dying is misplaced. They're certainly part of the problem, but not the root cause.

> AI needs to be opposed

AI doesn't exist. It is a marketing term used by grifters to sell their snake oil.

But even if it did, it's silly to claim that any technology needs to be opposed. This one is potentially more problematic than others because it raises some difficult existential and social questions which we might not be ready to answer, but it's still ultimately on us to control how it's used. We've somehow been able to do this for nuclear weapons which can literally obliterate civilization at the press of a button, so a probabilistic pattern generator seems trivial in comparison. It's going to be bumpy, but I think we'll manage.


> Regardless, "AI" is not doing the killing in that case. Rather, humans have deployed it to control weapons that kill people.

One of those humans is Sam Altman, which makes him a valid military target.

He's not somebody that released a product and doesn't know what it's being used for. He's selling it specifically to be used as part of killing people.


Right. Let's extrapolate that to Jensen Huang as well, and maybe TSMC, and also ASML engineers, why not.

Do you realize how ridiculous that sounds?


Did the US government ask Huang to buy drone parts for killer drones and Huang said yes? Did Huang offer to optimize the drone parts to make them more effective in killing people? Altman did.

> AI doesn't exist. It is a marketing term used by grifters to sell their snake oil.

They've claimed the term, this is not a useful objection to make at this point. And everyone was fine with calling our shitty little computer vision handwriting parsers "AI algorithms" before LLMs.

> We've somehow been able to do this for nuclear weapons which can literally obliterate civilization at the press of a button

Knowing what you know about nuclear weapons, if you ran into the Manhattan Project scientists, would you still be cheering them on? "Thanks guys, our democracies are so stable these will literally never be used for a nuclear holocaust, and they might have useful mining applications!"

Can you not think of any exceptionally nasty things the US government could do with "machines that act as if they can think for most practical purposes"? Do you think maybe it might be a good idea to develop that technology after you have made sure that the government serves the people's interests?


> They've claimed the term, this is not a useful objection to make at this point.

Sure it is. Someone saying that the sky is purple will never be true, no matter how many times they say it. Pushing against this is how we avoid the fabricated mystique around this tech, precisely so that people don't see it as a threat.

> Knowing what you know about nuclear weapons, if you ran into the Manhattan Project scientists, would you still be cheering them on?

You're twisting my words. I never said that I support what "AI" companies are doing. I said that your claim that "AI is killing people" is hyperbolic, and that you're barking up the wrong tree.

Besides, the scientific research invested in nuclear technology has produced far more benefits for humanity than drawbacks. It's very likely that the conversation we're having now wouldn't have been possible without this research. There's an argument to be made that even nuclear weapons and their deployment in WW2 had a more positive outcome than any alternative would've had.

Similarly, the same can be said about the current generation of "AI". For all its potential dangers and harms, whether direct or indirect, it has and will continue to have many positive use cases, some of which we haven't discovered yet. Ignoring this and opposing the tech altogether is throwing out the baby with the bathwater.

The solution isn't banning the tech. It's strongly regulating it, as we've done with many others. Unfortunately, governments move at glacial speeds, and some are deeply entrenched with corporations, so there are conflicts of interest galore, but that's still the most sensible approach to managing it safely.

> Can you not think of any exceptionally nasty things the US government could do with the "machines that act as if they can think for most practical purposes"?

Sure I can. Any government, organization, or individual can abuse any technology. But you haven't made the case why opposing technology itself would prevent that, versus holding those individuals accountable directly. Until then your comments come across as misplaced fear mongering.

> Do you think maybe it might be a good idea to develop that technology after you have made sure that the government serves the peoples interest?

So what do you suggest? We stop all tech R&D because governments can't be trusted? That's pure fantasy. No single government would even agree to it since technology is universal. If the US doesn't invent it, another country will. Advancing within this messy geopolitical framework is the only path forward, for better or worse.


> Any government, organization, or individual can abuse any technology. But you haven't made the case why opposing technology itself would prevent that, versus holding those individuals accountable directly

I think we should hold the individuals accountable directly. But we can't. The system is skewing further and further from the point where we could. Look at the Epstein files - everyone on the planet knows that there is a mountain of evidence condemning someone rich and powerful, and nothing will be done about it.

In the meantime, I want to stop handing weapons to the powerful people that we can't hold to account. I don't think we should stop all R&D - but I think "machines that act as if they can think for most practical purposes" are uniquely dangerous. I also used to think the "AI" companies were full of shit, until my work handed me a bottomless Anthropic API key to use for Claude Code. They can successfully navigate novel situations using tools to interact with the world. Tasks like "find me 20 puritanical White House staffers who are cheating on their spouses, using credit card / location history" are now costly only in terms of API tokens. Or going the other direction: "Find the organizers of this protest. Using all the information collected by big tech, find an unrelated criminal offence they have committed".

