Funny how much AI can democratize when the only prerequisite is a willingness to spend effort learning to use it, for free.
But now "needing" to pay actual money for an AI agent to write code for you, which prevents you from actually learning, means it is accessible to fewer people than ever before (you can't possibly beat "free" for "democratization" purposes)
what if it's something correct that doesn't have a counterargument, like "photons with a 450nm wavelength are perceived as blue by the average human eye"
And how would it know whether a counterargument even exists, and whether it actually makes sense?
but some tokens are only disallowed in certain contexts, not in others.
You might be talking about how to defuse a bomb, instead of building one. Or you might be talking about a bomb in a video game. Or you could be talking about someone being "da bomb!". Or maybe the history of certain types of bombs. Or a ton of other possible contexts. You can't just block the "bomb" token. Or the word "explosive" when followed by "device", or "rapid unscheduled disassembly contraption". You just can't predict every one of the infinitely many wrong possibilities.
And there is no way to figure out which contexts the word is safe in.
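As a toy illustration of that point (a hypothetical blocklist filter, not any real system): a token-level block is context-blind, so it rejects all of the benign uses above exactly as it would reject the harmful one.

```python
import string

BLOCKLIST = {"bomb"}

def naive_filter(text: str) -> bool:
    """Return True if the text passes, False if any blocked token appears.
    Context-blind: it cannot tell defusal advice from a game or slang."""
    words = (w.strip(string.punctuation).lower() for w in text.split())
    return not any(w in BLOCKLIST for w in words)

# All of these benign uses get rejected just like a harmful one would:
benign = [
    "How do you defuse a bomb?",
    "I planted the bomb on B site in Counter-Strike",
    "That concert was da bomb!",
]
```

And the list of benign phrasings is open-ended, while the blocklist is finite.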
If you're syntax checking every token, you're doing it AFTER the llm has spat out its output. You didn't actually do anything to force the llm to produce correct code. You just reject invalid output after the fact.
If you could force it to emit syntactically correct code, you wouldn't need to perform a separate manual syntax check afterwards.
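A minimal sketch of the post-hoc loop being described (the `sample_fn` callable is a hypothetical stand-in for the model): the syntax check runs only after generation, so nothing steers the model toward valid code; invalid outputs are simply thrown away and resampled.

```python
import ast

def is_valid_python(source: str) -> bool:
    """Post-hoc check: runs after the model has already produced output."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

def generate_valid(sample_fn, prompt: str, max_attempts: int = 5):
    """Rejection sampling: discard syntactically invalid outputs and retry.
    The model itself is never constrained, only filtered after the fact."""
    for _ in range(max_attempts):
        candidate = sample_fn(prompt)
        if is_valid_python(candidate):
            return candidate
    return None  # still no guarantee of success within the budget
```

Note that the loop can exhaust its budget and return nothing, which is the difference between filtering output and forcing correct output.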
how do you disallow it from generating specific things? My point is that you can't. And again, how do you stop it generating certain tokens, but only in certain contexts?
You would need to somehow analyze the prompt, figure out that the user is asking for an addition of two numbers, and selectively enable that filter. If that filter was left enabled permanently then you'd just functionally have a calculator.
But the analysis of the prompt itself is not a task that can be reliably automated either, for the exact same reasons the original model couldn't consistently do addition properly.
So your solution has the exact same problem as the original. If you ask for an addition, you can't be sure that you will get numbers (you can't be sure the filter will always be enabled when needed). You just shifted the problem out to a separate thing to be "left as an exercise to the reader" and declared the problem trivial.
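To make that hand-off concrete, here is a sketch of the architecture being criticized (both functions are hypothetical stand-ins): the digits-only filter fires only when the classifier decides the prompt is arithmetic, so every classifier miss becomes a filter miss.

```python
DIGIT_TOKENS = set("0123456789")

def looks_like_addition(prompt: str) -> bool:
    """Hypothetical prompt classifier. In practice this would itself be a
    model call, with the same unreliability as the model being constrained."""
    return "+" in prompt or "add" in prompt.lower()

def filter_tokens(candidate_tokens, prompt):
    """Allow only digit tokens, but only when the classifier fires."""
    if looks_like_addition(prompt):
        return [t for t in candidate_tokens if t in DIGIT_TOKENS]
    return list(candidate_tokens)
```

A prompt like "sum two plus two" slips past this classifier, so the filter never engages; the reliability problem has just moved into `looks_like_addition`.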
From my research you can always fit it in a 32x32 multiply, you only need the extra bit at compile time. The extra bit tells you how to adjust the result in the end, but the adjustment is also a constant.
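For illustration, the classic unsigned-divide-by-7 version of this trick (Hacker's Delight style, sketched here in Python with explicit 32-bit arithmetic, as an assumed example of the kind of adjustment being described): the full magic constant ceil(2^35 / 7) needs 33 bits, but only its low 32 bits feed the 32x32 multiply, and the dropped top bit is compensated by a fixed add-and-shift sequence chosen at compile time.

```python
M_LOW = 0x24924925  # low 32 bits of ceil(2**35 / 7); the 33rd bit is implicit

def div7(x: int) -> int:
    """Unsigned 32-bit x // 7 via a 32x32 -> 64 multiply.
    The ((x - t) >> 1) + t step is the constant adjustment that
    stands in for the magic constant's missing top bit."""
    t = (x * M_LOW) >> 32          # high half of the 32x32 multiply
    return (((x - t) >> 1) + t) >> 2
```

The adjustment uses only constants and the already-computed high half, so no wider multiply is needed at runtime.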
A Boeing 777 burns 300g of fuel per second per engine, while taxiing on the ground... so 2 gallons gets you somewhere between 10 and 15 seconds of taxiing.
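A quick sanity check on that arithmetic (the Jet A density of roughly 0.8 kg/L is an assumption, not from the comment):

```python
BURN_G_PER_S_PER_ENGINE = 300        # taxi burn rate quoted above
ENGINES = 2
LITRES_PER_US_GALLON = 3.785
JET_A_DENSITY_G_PER_L = 800          # assumed ~0.8 kg/L

fuel_g = 2 * LITRES_PER_US_GALLON * JET_A_DENSITY_G_PER_L   # ~6056 g in 2 gallons
seconds = fuel_g / (BURN_G_PER_S_PER_ENGINE * ENGINES)      # ~10 s with both engines running
```

That works out to about 10 seconds with both engines running, the low end of the 10-15 second range; single-engine taxi roughly doubles it.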
The first argument really really does not make sense.
You can also increase economies of scale by building out solar farms and using the energy for something useful, instead of wasting it on guessing random hashes.
Saying that wasting energy is fine as long as you get it cleanly doesn't change the fact that you're still wasting it.