This is not a critique of the work, but I have been encountering category theory often in many of the research topics I'm working on, and even as somebody who majored in math, I sort of feel like it doesn't add much. I know ML frameworks intimately, and you really don't need category theory to describe them. But this is maybe (probably) a failure of mine, because I have not yet grokked what category theory is really bringing to the table.
Category theory isn't usually intended to add things. Often, what category theory brings to the table is connecting different branches of math. (It's a bit similar to abstract algebra in that way, but at a different level.)
For example, the lambda calculus is the base for many useful programming languages. But the lambda calculus maps to a "Cartesian closed category". And many, many other interesting things in math can be mapped to a Cartesian closed category.
So now you can ask, "What if affine or linear logic were a programming language?" And the answer is, "You'd get a language with safe resource management, like Rust." Or you might ask, "What if probability were a programming language?"
Or on a smaller scale, a parameterized collection type with "map" is a functor. Add a single-element constructor and a "flatten" operation, and you have a monad. Functions with parameterized types are often natural transformations. And so on. This can then be directly analogized to constructs in different areas of math. Which might sometimes produce an interesting idea or two.
So category theory isn't always used to add something new and profound. Sometimes it's just a handy way to see something already there.
It helps to not make a cluttered mess of an API. Breaking up the problem space with a few composable functions that can express all that you want to express is a definite win.
Category theory is all about relationships and structural patterns, so it's useful when you want interoperability and composition between systems, i.e. invariants under transformations, etc.
Without reading too much into what this framework does, I'd say category theory could be useful for some ML problems (e.g. layer composition, gradient propagation) - but I'd think it would be more useful as an analytical tool than as actual lib/code structures.
> as somebody who majored in math, I sort of feel like it doesn't add much. I know ML frameworks intimately, and you really don't need category theory to describe them.
Not that the work linked here is doing this, but.. using category theory to describe the approach could make a lot of sense even if it's not required. The idea would be that instead of inventing the next architecture from scratch maybe you aim to correct problems in the current generation of systems by showing some generalization/transformation that doesn't have the problems.
Now that "more is different" is something that everyone believes implicitly, the alternative to an abstract existence proof is literally millions/billions spent on a proof of concept that may not work. Doesn't mean we need to use category theory, I mean there's reasons that might be a good idea, but if this were doable without some big change in perspective then it seems likely it would be done already
Category theory is rarely useful by itself, but it can be a mental scaffold when designing things like query languages. Microsoft's LINQ DSL within C# used category theory ideas to ensure consistency. That said, the applicable surface area in practice is typically quite limited in my experience. It's like formal methods -- elegant in principle, but a good problem fit is rare. It's like writing a Lean proof for your web app -- rarely needed, but if your web app needs a high degree of correctness, then indispensable.
This is John D Cook's take:
Category theory can be very useful, but you don’t use it the same way you use other kinds of math. You can apply optimization theory, for example, by noticing that a problem has a certain form, and therefore a certain algorithm will converge to a solution. Applications of category theory are usually more subtle. You’re not likely to quote some theorem from category theory that finishes off a problem the way selecting an optimization algorithm does.
I had been skeptical of applications of category theory, and to some extent I still am. Many reported applications of category theory aren’t that applied, and they’re not so much applications as post hoc glosses. At the same time, I’ve seen real applications of categories, such as the design of LINQ mentioned above. I’ve been a part of projects where we used category theory to guide mathematical modeling and software development. Category theory can spot inconsistencies and errors similar to the way dimensional analysis does in engineering, or type checking in software development. It can help you ask the right questions. It can guide you to including the right things, and leaving the right things out. [1]
You don't need category theory to describe the Result type. But the people who first introduced it to programming languages were thinking about category theory.
IMHO, the EU needs to become more proactive when it comes to tech markets. Trying to find new ways to tax US tech megacorps is not going to cut it.
1. Create competitive low tax regimes for EU based tech companies, both for investors and employees.
2. Forbid buyouts by US tech companies.
3. Become more protective of tech markets in general. For example why the fuck does AirBnB or Uber get to operate in Europe? What is there to gain? We have our own alternatives.
4. Give preferential treatment to EU based tech companies. For example for government related contracts, why the fuck are EU governments depending on Microsoft/Google/Amazon?
5. Prioritize tech over other lower growth industries. Yes selling petrol cars was good business 30 years ago.
I think it's really clear what Europe should do, and it has been for a while. Now insert politics and you get governments mostly captured by US influence thanks to propaganda, lobbying, deep ties, and a European upper class invested in US companies.
Any tech focus won't make a dent unless Europe gets out of the US grip. And doing that would require huge changes in politics. So far we are seeing a turn to the far right, which is even more tied to the US (and US tech) than the center right currently heading Europe.
We will see. The US digital tech moat might be quite shallow. Maybe in hardware, but software in Europe is pretty great, and most of the moat is network effects. Network effects can be dissolved much more easily and quickly than hard tech. But it's never been a fair fight.
Yes, unfortunately it's very easy to buy our bureaucrats both at national and EU level. Heck most of them style themselves to be Americans! You make very good points.
We are building general thinking machines with the aim of replacing all human labour, ... but humans won't be replaced, they will find other jobs, because when we introduced tractors they were able to find other jobs, ... totally the same scenario.
I love the cognitive dissonance.
Even in the best case scenario where the generated wealth will be distributed, and somehow we will be able to keep them in check (unlikely), what would be the point of life in a world where machines can best us at everything?
> We are building general thinking machines with the aim of replacing all human labour, ... but humans won't be replaced, they will find other jobs, because when we introduced tractors they were able to find other jobs, ... totally the same scenario.
Technically, there's no cognitive dissonance in the statement you made, at least with the way you worded it. Thinking machines can only do thinking labor (for now), so the bright future ahead is one where mental work is reserved for the elite, while everyone else does hard, physical work in places that are too messy for the machines to operate in at the moment.
> when we introduced tractors they were able to find other jobs
Coincidentally, I am reading Grapes of Wrath. Chapter 5 is my favourite, and it's about how the big banks tractor people off the land. The whole damn book is as relevant as ever, but this chapter just sticks with you.
Two example scenarios described by Kurzweil in The Singularity Is Near: superintelligence augmenting human intelligence via direct brain interface (humans vs. AI goes back to intelligence vs. intelligence as usual), or we get to live like very, very pampered and worshipped cats.
> Even in the best case scenario where the generated wealth will be distributed, and somehow we will be able to keep them in check (unlikely), what would be the point of life in a world where machines can best us at everything?
Read some of the Culture novels by Iain. M. Banks.
I mean, there is much more to life than work... so let's not pretend it's all about working.
Everyone in America is now fed and most children grow up spending a ton of time with both parents. This is because of automation greatly raising productivity and bringing costs down throughout the 20th century.
It's easy to think things are terrible, but they are actually insanely good. Just 100 years ago life was horrible for basically everyone by today's standards, now it's not.
AI will continue the trend, raise productivity and bring costs down. Now it's for white collar output, instead of manufacturing and agriculture.
The labor force disruption will be painful, as it always is, especially in a country without a strong social safety net, but things will be better on the other side because we just made a ton of work more efficient and can produce more with less.
We shouldn't throw the baby out with the bath water just because it affects us this time...
Technology has been replacing manual and mental labor for millennia, and especially in the last 150 years. A farmer or accountant from 1875 would be utterly shocked by how much we depend on machines and the social and industrial institutions they enable.
And all the benefits that brings. Not just in raw economic terms, but in quality of (family, community, recreational, commercial, ecological, medical) life.
Kinda hard to imagine it will suck if another order-of-magnitude leap along that long line happens.
> Technology has been replacing manual and mental labor for millennia
The difference this time is that no one can articulate what these "new" jobs are that people will find. When agricultural jobs were being decimated, factories were opening up (whether they were better jobs or not is a different discussion, but the point being that the technology opened up new opportunities while destroying the old ones). We do not see this with AI, and I have yet to read even any reasonable speculation of what these "new opportunities" might be. Sure you could argue that the future is unknown, but we should be able to at least glimpse it. And yet, we can't. Because almost any "new job" that you can come up with that doesn't exist today (which is already hard to imagine) could ostensibly also be replaced with AI.
So all we have is comments like yours, vague "it worked before so it'll work again" (let's ignore the fact that the circumstances are completely different), or even worse, "people will have time to focus on things that matter" with no explanation of how they'll pay the rent and buy food to survive.
> all the benefits ... raw economic terms ... quality of (family, community, recreational, commercial, ecological, medical) life
In what way is AI improving any of these? So far, it's making all of these worse. Productivity increases don't matter if they don't benefit more than just a few wealthy shareholders.
> A farmer or accountant from 1875 would be utterly shocked by how much we depend on machines and the social and industrial institutions they enable.
A bit of a tangential anecdote from my dad, who is a retired biologist. He was one of the first in the department to use a computer in the 1970s and wrote some programs to do tedious calculations that previously had to be done by hand and took days of human labor. Even a 1970s computer could finish the calculations with his programs in a few minutes.
His boss, an older tenured professor, could not believe that 'these damn computers' could possibly be right. Doing the same calculations in a few minutes? Impossible. So for a few weeks (or months, I forget), he redid all the calculations the computer had done by hand to prove that the computer must be wrong.
One day he comes to my dad and says "can you show me how to use one of these computers?"
If you can't see the difference between prior technological jumps and this current jump, you are part of the problem.
The world is changing quickly. Our most coveted defining traits - our minds - are under attack. This is a technology that seeks to replicate your thought processes and critical thinking and then to execute it at machine speeds.
If you think this is like the industrial revolution, you're actually right. We're still replacing animals with machines. But now we are the animals.
Anything other than a serious discussion about UBI or a post-labour economy is a joke. This is technology that aims to displace most of us.
The motorized tractor and other agricultural technologies aimed to, and did in fact, “displace most of us” once upon a time. And now, because I’m not a farmer, I get to spend much more time with my family, in recreational pursuits, sleeping, …
> And now, because I’m not a farmer, I get to spend much more time with my family, in recreational pursuits, sleeping, …
You'll have even more time with your family when you are no longer a SWE, e.g.
When automation displaced farmer manual labour, it also led to new jobs opening up for that labour to flow into.
What new jobs/fields do you see developing out of AI tools and how they've been marketed so far?
Every step of automation across the history of humanity has led to a "concentration of power" in jobs/fields which required brainpower. AI is the technology coming for brainpower. Where do we go from there? Back to farming?
And when I say AI is coming for the brainpower, it's coming for it in two ways: directly where it takes our jobs and indirectly where a lot of people using it are seemingly getting dumber. Both are quite dangerous to our combined futures.
And remember if there aren't jobs, people probably won't just lay down and die.
It won't be Marvin saying, "Oh god I'm so depressed, what's the point?" We'll just start killing each other in massive numbers cause, well, if you can't create anything and there isn't enough for everyone, what else is there to do but fight over what there is
But that's the thing, and what's really different from how it's ever been before: there absolutely is enough for everyone.
It's being deliberately gatekept from us by the wealthy, and by those who believe that no one should be allowed to have anything they haven't "earned".
The tragic thing is, to the extent that you're right, people will probably mostly kill other people who have nothing, rather than turning their anger and violence where it truly deserves to go: the rich bastards who want to own everything and prevent the rest of us from having anything.
There isn't though. Our infrastructure cannot sustain the AI race. Food supplies are weakening. An ever increasing population is being encouraged by billionaires.
There could be! But there currently is not. Nor is there any plan for that to change.
Well, you're right that our infrastructure can't sustain the AI race, but (while it's true I didn't make that clear), that's not remotely what I was talking about.
Even with food supplies "weakening"—which is only happening due to the pointless Iran war, not due to any larger trends—we still have plenty of food to feed every human being on the planet.
And regardless of what billionaires might encourage, population growth is slowing. (To the extent that it might become a genuine economic problem in a few decades if we don't find a way to adjust our economic systems to stop depending on a constantly-increasing population.)
Tariffs wiped out a lot of farmers. Fertilizer shortages absolutely are a result of the Iran war.
Population growth can occur if billionaires lobby Republicans and make birth control illegal. Which is happening right now. Last week we were seeing some scary news about that.
There is a big difference between what we could be doing and what is happening. It's more profitable for the ultra wealthy that we can barely survive. I know it sounds abrasive, but it's a fact and there is a lot of evidence pointing to this. Googling "Elon Musk wants people to have more children" turns up a scary number of hits from different conversations. Bezos as well.
"We have enough for everyone, and it is being deliberately gatekept by the wealthy" does not logically equate to "therefore if we removed that gatekeeping we'd have Stalinism" without a whole bunch of extra steps not currently in evidence.
For instance, there's no reason why we can't say "sell your food for lower prices than the maximum the market will bear; the government will pay you to make sure you're still solvent" (this is called subsidies, we do it all the time). We also have a lot of antitrust levers at our disposal that have been growing rusty from disuse since Reagan's administration decided that monopolies were Good For Us, Actually.
I'm not proposing any specific solutions to this problem; I'm merely stating that it is a problem. If anything's "fucking insane" here, it's responding to that with ad hominem attacks and some very thick sarcasm about communism.
Furthermore, "communism" as an economic philosophy does not inherently lead to the specific circumstances that we saw in the Soviet Union during the 20th century. I know people tend to start crying "uh-uh-uhh! No True Scotsman fallacy!" when one points out that the USSR was not, in fact, doing a great job at communism, but it really wasn't. At first they tried, but it very quickly descended into a fairly standard authoritarian regime where economic decisions were being made not based on what was best for the people, but on what was best for the dictators and their cronies. (And funny enough, those same people don't seem to have a problem with the idea that the Democratic People's Republic of Korea is not, in fact, particularly democratic...)
I just don't buy that "now there's enough for everyone" as opposed to every other point in history where... contention for resources was pretty much the same. Go as far back as you want and you'll find records of war and strife, even though the planet's resources were limitless compared to the consumptive pressure they're under now.
People think that being the first soldier over the wall in a siege, veritably a death wish, was a punishment reserved for criminals or something, but actually it was a mechanism of social mobility in an ancient society. An individual who breached the enemy's defenses was honored and paid richly.
You can observe the dynamics just as readily in modern society, watching a show like Survivor which puts small-group social politics front and center. Often the players of the game wish that everyone would just go along with the plan. Sometimes it does happen, but just as often a few people see a better opportunity for themselves by breaking with the popular consensus. It doesn't even have to be the same people every time. It's just always better for some people to try going a different way: making their own alliance or their own city or country. Both the drive towards individuality and its results, strife and resilience, are necessarily eternal in the cycle of life.
If you want to be glib, there is a light and a dark side to the force.
It's more that women are the less expendable gender. If you send the women to die on the front lines, who is going to birth the next generation to replenish your population?
I really like the game as a concept, and I got pretty far already (level 2.16), but I feel like some levels aren't making much sense at all in terms of the goal and how the passing circuit fulfills the objective. The Latch and some others had me puzzled. Because it is clearly vibecoded, I feel like I cannot really trust the educational material all that much, and at times it's not clear if I am misunderstanding something or if it is just nonsensical LLM slop.
Chomsky spent the latter half of his career decrying the capitalists and telling us that we should be suspicious of them. It certainly shows that he didn't walk the walk.
I don't think Chomsky's relationship with Epstein is in any way defensible, but I've seen similar comments to yours all over the interwebs and I'm confused by them. Chomsky never decried capitalists or told us to be suspicious of them on a personal level. Or at least, not in any of his political work that I've ever read. He was anti-capitalist, but he didn't have a simplistic view of the world where individual capitalists were inherently evil.
We have a pretty good idea. Bitcoin holders and miners promote it. Punters buy because it's going up until it peaks and then they sell because it's going down.
The value is approximately <amount average person invests in bitcoin> x <number of people> / <number of coins (~16 mil)>.
It goes up over the cycles as the number of people in it goes 1 mil, 10, 100, 1000 etc.
I agree with you that STEM folks don't hold a monopoly on intelligence.
> engineers that can't get laid in Singapore are the same "brilliant" engineers that can't get laid in their home town
Maybe so, but not for the same reasons. Back in their home town, they cannot vibe with anyone because the few who might be compatible have long since left. In a STEM hotspot, they go to an event and meet compatible people, but it's 11 guys for every 3 girls, so unless they are top dog in that room, they aren't going to score.
IDK the dating scene in Singapore. I frankly didn't even know that Singapore was considered a tech hub. I was using it as a synonym for a tech hub because that is what I assumed the author was doing.
> Nothing worse than being famished and getting one measly slice of pizza.
I am not exactly a big guy, but even I can easily eat two slices of pizza, and I am talking about real slices of the Costco pizza, which I love for its value for money. I can't imagine how you could feed a team of eight with a single pizza.