I'm taking the radical approach of starting with the problem and finding a solution, rather than starting with a solution and hitting all your problems with it.
Well it wasn't really a teaching revolution. It was a marketing job around a YouTube channel that purported to be a teaching revolution.
The thing is, people want more than material. They want the material to be accredited and examined; otherwise there is no demonstrable credibility in completing it.
And there's a whole world out there of higher-quality material that has that accreditation and examination structure around it. It existed, sometimes for decades in the case of The Open University, before Khan Academy appeared. But it costs money.
Promises are broken, policies are changed and political regimes vary. You need to make sure that you consider the future and not just now. And that means NEVER handing your data over in the first place.
That's easier said than done. Even if you don't directly use Google services, chances are that Big Data is still watching you on every website you go to. And if you have a mobile data plan, your service provider knows exactly where you are 24/7.
As someone who has worked on closed-source software for a couple of decades: most companies won't even know about that, and of those that do, only a fraction give enough of a shit to do anything until they are caught with their pants down.
Having worked in quite a few agency/consultancy situations, it is far more productive to smash your head against a wall until it bleeds than to get a client to pay for security. The regular answer: "This is table stakes; we pay you for this." Combined with: "Why has velocity gone down? We don't pay you for that security or documentation crap."
There are unexploited security holes in enterprise software you could drive a boring machine through. There is a well-paid "security" (aka employee surveillance) company running Python 2.7 (no, not patched) on each and every machine their software runs on, at some of the biggest companies in the world. They just don't care about updating it, because why should they? There is no incentive. None.
Yeah, it's fundamentally an issue of asymmetric economics.
Running AI scanners internally costs money, dev time, and management buy-in to actually fix the mountain of tech debt the scanners uncover. As you said, there is no incentive for that.
But for bad actors, the cost of pointing an LLM at an exposed endpoint or a reverse-engineered binary has dropped to near zero. The attacker's tooling just got exponentially cheaper and faster, while the enterprise defender's budget remained at zero.
In theory, though, there is now a new way for the community to support open source: running vulnerability scans in white-hat mode, then reporting and patching. That way they burn tokens for a project they love, even if they couldn't actually contribute code before.
There should be a way to donate your unused tokens at the end of every cycle to open source, like rounding up at the checkout!
That sounds like a great idea. I'd love to be able to contribute the remainder of my monthly AI subscriptions for something like this, especially since some of them bill and refresh their quotas by calendar month.
Hang on, why is it costly for in-house to run AI scanners but near zero for threat actors to do the same?
I've seen multiple proprietary shops now include a routine AI scan of their code because it's so cheap, and they may as well use up unused tokens at the end of the week.
I mean, it's literally zero, because they already paid for CC for every developer. You can't get cheaper than that.