Hacker News | ux266478's comments

Money is a means to an end, so that's true. Just because you have a screwdriver does not mean you will drive screws. You have to have someone who can use the screwdriver, knows righty-tighty, wants to drive the screws, etc. Stupidly throwing money at a problem can get you places, but the efficiency can also drop to near zero. The problem is we're talking about a quantity of money where you don't need to be highly efficient. Savants can do pioneering work in a cave with a box of scraps, but you don't have to strive for that kind of austere efficiency. Nobody is expecting that.

If Apple can't harness the potential of the currently oversaturated labor pool, that indicates a systemic issue within Apple. The entire raison d'être of management structures within a business is to increase the efficiency of capital in driving productive forces. If they cannot do that, it would indicate an extremely problematic competency crisis within Apple's management organization.

This kind of failure when you are a company with the valuation of a first world country's GDP should be raising alarm bells in any rational person's mind.


mmmmmm, no it would seem like they are absolutely under a social obligation to not do that.

> and showed how computations could be expressed in Haskell with the “do notation” as an example

To be clear, do notation is special syntax that was added later to make monads more ergonomic. Traditionally you used >> or >>=, which looks a lot more like chained closures.
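For illustration, here's a sketch of the correspondence using the Maybe monad (safeDiv is a made-up helper, purely for illustration):

```haskell
-- safeDiv is a hypothetical helper: division that fails on zero.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- With do notation:
calc :: Maybe Int
calc = do
  a <- safeDiv 10 2
  b <- safeDiv a 1
  return (b + 1)

-- The same computation written directly with >>= and lambdas:
calc' :: Maybe Int
calc' = safeDiv 10 2 >>= \a ->
        safeDiv a 1  >>= \b ->
        return (b + 1)
```

The do block is just surface syntax for the >>= chain below it; the compiler desugars one into the other.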


Within most languages, you're operating at a semantic level where much of the "point" is already obviated for you. They deal with fundamental structure that you take completely for granted, and you use it all implicitly. At its core a monad is very simple: it's an ordered collection, flattened into a single context. What you're collecting, what that ordering means, what that context is, etc. define what the monad is used for.
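A sketch of that reading in Haskell, where for the list monad the "collection" is literal and the flattening is just concatenation:

```haskell
import Control.Monad (join)

-- join collapses one layer of nesting: the "flatten" step.
nested :: [[Int]]
nested = [[1, 2], [3]]

flattened :: [Int]
flattened = join nested  -- for lists, join is concat

-- bind (>>=) is "map each element to a collection, then flatten":
expanded :: [Int]
expanded = [1, 2, 3] >>= \x -> [x, x * 10]
```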

How could you do IO? IO requires temporal ordering. Take for instance:

    print("Hello ")
    print("World!\n")
Would obviously result in:

    Hello World!
But would it? You are implicitly assuming that the first line will be evaluated and print before the second. It's a reasonable assumption to make, most programming languages embed that in their execution semantics. What if I told you that the assumption isn't actually guaranteed? What if we didn't give that temporal ordering in the same way? What if for instance, a function could return a result without evaluating its arguments? This is called non-strict evaluation (note: this does not necessarily mean lazy evaluation). In the case of a non-strict language, you would need some way to tell the program that the first line should happen before the second before you can do any kind of IO. For a strict language, the IO monad doesn't make sense because you don't need to tell the program that.
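A small illustration of non-strict evaluation in Haskell (ignoreArg is a made-up name for this sketch):

```haskell
-- A function that returns a result without evaluating its argument.
ignoreArg :: a -> String
ignoreArg _ = "done"

-- Under non-strict evaluation this is fine: the erroring argument
-- is simply never touched.
result :: String
result = ignoreArg (error "never evaluated")
```

In a strict language the argument would be evaluated first and the program would crash; in Haskell, evaluation order follows demand, not source order, which is exactly why IO needs explicit sequencing.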

Haskell is almost like a metalanguage. You're describing a program, but it's not like describing a program in Python or Scheme. You are expressing a program in graph reduction, and that's very different compared to how you're used to thinking of computer programs. That's the practical reason why Haskell has the IO and State monads, because they reify as a temporal grounding for instructions. Your program has a completely different concept of flow than in the real world, and these are tools you have to bridge that gap. It's important to note, this is just a very specific usecase of monads.

If you find the treatment to be shallow, it's probably because you're looking for answers in shallow contexts. I used to be as confused as you, and the answer I eventually discovered was that I was ignorant of my own ignorance. I needed a healthy dose of computational philosophy to broach the subject. As someone else has said, once you understand it, it can be hard to explain it to someone who doesn't understand it. It's not a short topic to be learned in a series of twitter posts or a blog. It's something you come to understand after a lot of exposure and study and careful rumination. And of course, primary sources.


> Actually, in my opinion, Scheme (and Lisp) allows you to express complex systems and problem domains in more simple terms than any other language can.

I disagree with this. Lisps are procedural languages like most other programming languages. They can describe procedures in a relatively simple manner (big asterisk on that one, the simplicity of Lisp as a language is greatly overstated and conflated with the simplicity of s-expressions), but procedures aren't really good at expressing complex systems or problem domains. In my experience, relational languages like Prolog are vastly better at this. Lisps are much easier for the average programmer to pick up and write reasonably performant code with, though.


Lisps are expression-based languages, but not pure. It's easy to mistake them as "like most other languages", but they're not quite the same: everything is an expression and returns a result. There are no "statements".

They appear procedural because of syntactic sugar: the body of a function is implicitly wrapped in (progn ...), (begin ...), ($sequence ...), etc., which are all equivalent expression forms that evaluate their sub-expressions in order and return the result of the last one.

   (progn a b c)      ;; Common Lisp
   (begin a b c)      ;; Scheme
   ($sequence a b c)  ;; Kernel

   ;; evaluate a, then b, then c,
   ;; ignore the results of evaluating a and b,
   ;; return the result of evaluating c.
So when you see:

   (define (foo)
       (expr1)
       (expr2)
       (expr3))
If we desugar, it would be

    (define foo (lambda () (begin (expr1) (expr2) (expr3))))
We get behavior that looks just like other procedural languages (without a "return" keyword) - but everything is still an expression.

A similar construct is the comma operator in C: imagine you wrote no statements, and the body of each of your C functions was entirely a chain of comma operators.

Common Lisp has a couple of other useful related forms: prog1 and prog2. They still evaluate their sub-expressions in order, but prog1 returns the result of evaluating the first expression, and prog2 returns the result of the second.

    (define (foo) (prog1 (expr1) (expr2) (expr3)))
    (foo)
 
    ;; evaluates expr1, then expr2, then expr3
    ;; returns the result of evaluating expr1
    ;; ignores the results of evaluating expr2 and expr3

I'm very familiar with Lisp, you don't have to explain it to me. I think you are mistaken about what is meant by "procedural language" here. It's simply the mode of computation where a program is directly conceived as a hierarchical sequence of steps. I think you got caught up in the idea of a grand disjunction of "expressions" and "statements", with a distinguishing feature involving return values, and so on. But no, that's not particularly relevant here (nor is it universally true, or applicable).

To simplify it, you can consider cons, car, and cdr the beating heart of Lisp. These primitives directly encode the execution semantics as the traversal of a head over cells of a tape. Lisp belongs to the same family as the Turing machine, and that's a very big family. SML also belongs to this family. The overwhelming majority of programming languages belong to this family.


I think it all depends on the shape of the problem. I love Prolog for its expressive power, sometimes. Complex simulation problems are really nice to model in OCaml or CLOS, and then again, maybe remodelling in Prolog brings some insights. And often writing a recursive function in Lisp is all you need to understand a complex system. It's all layers. An outer shell to Prolog would be a theorem solver for example, because Prolog is a very rudimentary one.

> An outer shell to prolog would be a theorem solver for example, because Prolog is a very rudimentary one.

I think it's the wrong way to look at it. It's not that Prolog is a rudimentary theorem solver, it's that theorem provers are a specialized use-case of deductive proofs, so a computational foundation of FOL makes them trivial to write as programs. A pile of bricks and a jar of mortar isn't a rudimentary house, so to speak.


Prolog is great for the cases where logic programming makes sense, but you can easily create a logic engine in Lisp (or other languages: that's what kanren/miniKanren is, and it has been ported to many languages).

Of course, it holds for all Turing-complete languages; that's what computational class is all about. It's not a good argument for using or not using something, though, because in practice it's always worse. Think of it like this: would you rather write programs with SBCL, or with some company's internal unnamed domain-specific tree-walking sexpr language that doesn't even have macros, TCO, or a real REPL, and possibly isn't even Turing complete? Would you ever in a million years suggest that the latter is even half as good as proper CL, let alone a replacement? I sure wouldn't.

Exactly, clojure.core.logic is an example of a miniKanren-inspired logic engine.

Haskell's syntax comes from ISWIM, which was motivated quite a lot by m-expressions.

How much work are you putting into simplicity? In my experience, in order for software to be permanent it needs to be like mold: only a single spore is required to grow a massive fruiting body, and the spores themselves are very small and very uncomplicated. In this case, a spore is a single developer, and the simplicity is a low skill ceiling. Reproducibility does not benefit longevity if the preconditions themselves are highly complicated, and the benefit of simple bootstrapping is easily overshadowed if the software itself isn't friendly to being extensively hacked on by the average programmer.

I've written about this: https://anirudh.fi/future

there's something about new VC-funded-bro narcissism that's so fascinating

> GitHub? Where do we even begin…

The problem with GitHub is neither its UX nor its functionality. Its downfall is VC funding, but you made sure to copy only that and none of the good things.

> GitLab? Way too enterprise-y, and definitely not easy to self-host.

The only reason you don't offer an enterprise version yet is because atproto sucks and there's no way to make it private. Do you honestly think VCs are paying you to play with your strings and sheep? Your users won't pay for anything because there are already free alternatives that don't force them to join yet another cult. "Why should I join Tangled? uhmmm it's like a worse version of everything but it has atproto! you like atproto don't you, 14-year-old well-established project with millions of users?"

> Sourcehut? So opinionated it alienates about 98% of potential contributors. Pretty great if you really love email, I guess.

Do you hear yourself? In what world is Tangled not extremely opinionated, alienating everyone but hardcore atproto followers? "pretty great if you really love atproto i guess".

> Forgejo/Gitea? Nice, sure. You can self-host—but without a shared identity, I still need to create an account on your instance just to send a PR.

It also works, is widely used, and is battle-tested. It has a familiar UI and CI. Oh, and apparently this newfound concept called private repos.

> Radicle? Honestly, it’s amazing. Purely technically, Radicle is far ahead of anything else, Tangled included. But the world—at present—just isn’t ready for full-on P2P.

The world is ready for appview + pds + did + ... yeah, okay. Only hardcore atproto fans want this bs.


While forcefully stated, these criticisms are on point, especially given the lack of answers by Tangled on monetisation.

i'd like to point out that C++26's reflection is the third reflection mechanism defined by the language, covering the "niche corner cases" that have hitherto been lacking from the other two (RTTI and type traits). this specific "niche corner case" also would not exist if C++ had not committed to a poorly bolted-on feature that turned out to be accidentally powerful, and had instead intentionally designed powerful metaprogramming facilities from day 1.

there's a point at which "pragmatism" starts being anything but, and it was around C++11, give or take a standard. how on earth do you use it day to day and not feel the schizophrenic non-design as a generalized property across the whole language?


The set theorists decided that mathematics is the overarching superdomain over all study of structure. You don't get to pick and choose. Either mathematics is a suburb of logic and these two things are separate, or they're not and ZFC dogmatics need to accept they don't have a monopoly on math.

I of course fully support reinstating logicism, but the same dogmatics love putting up a fight over that as well.


I think the most surprising thing I've learned taking formal math in college is just how pragmatic mathematicians are (at least for my teacher, with sample size n=1). They're much more interested in new ways to think about ideas, with proofs of correctness as a side effect. The proof is more about explaining why something works, not just that it does.

I'm going to take a formal logic class in the fall, and my professor said something akin to "definitely take it if you're interested, just be aware that it probably won't come into use in most of the mathematics done today." The thing is, the foundations are mostly laid, and people are interested in using said foundations for interesting things, not in constantly revisiting the foundations.

I think this is one reason most mathematicians don't see a need for formal proof assistants, since from their perspective it's one very small part of math, and not the interesting one.

This is not to say that proof assistants are a dead end—I find them fascinating and hope they continue to grow—but there's a reason that they haven't had a ton of traction.


I think that's a good way of putting it. I would add that most people working in mathematics aren't generalists; their primary interest isn't in a broad picture. Rather, most are hyperfocused on a single domain with a strong backbone of reflexive intuition built up. By virtue of sheer human limitation, there's only so much someone can care about what's happening outside of their world while still making serious contributions within it. This doesn't just apply to shifting foundations: number theorists can hardly be expected to keep up with the forefront of graph theory, for example.

For the pragmatists, logic as a field commits the mortal sin: it blasphemes against the intuition that mathematicians spend years honing, by obliterating it. Not just for a single domain, but for all domains. Of course, that doesn't really explain the whole picture. Formalism built a holy walled city. Logicians, by the nature of their work, leave the safety of the walled city to survey, exploit, and die in the tangled jungle outside. Some don't even speak the holy language of the glorious walled city; they talk in absolutely gibberish modalities and hyperstructures. There is a political tension held against logic and logicians as a result.


Mathematicians use logic to talk about the mathematical world. But logic is not the world.

Not even the most dogmatic of the set theorists ever argued mathematics was possible without reason, however. For mathematics, logic is the world, as the copula makes no distinction between substance and existence. In the same sense that the earth is not matter itself, but it is a material thing.

Putting that aside, to make things more clear: computer science is mathematics. Computer scientists are mathematicians. That was a categorization decided long before you and I ever lived. In the sense that you mean, you're only referring to a very small fraction of what "mathematics" refers to in the true sense of the word. It is just as irreconcilably disjointed as logic is: not unified, and fundamentally non-unifiable.

I too think it would be better if "mathematics" was reserved for the gated suburb of ZFC. But that's not the world we live in, courtesy of the same people who pushed ZFC as a foundation to begin with.


> For mathematics, logic is the world, as the copula makes no distinction between substance and existence.

No. There are truths about the subject not captured in any single formal system, which is why objects are studied from many perspectives.

> Computer scientists are mathematicians.

Some are and some aren’t.


Classical logic has plenty of limitations/roadblocks, all logics do. Logic isn't a unified domain, but an infinite beach of structural shards, each providing a unique lens of study.

Classical logic was rejected in computer science because its non-constructive nature made it inappropriate for an ostensibly constructive domain. Theoretical mathematics often proves that an object exists and then does nothing further with it; a computer, generally, is more interested in performing operations over objects, which requires more than proving the object exists. Additionally, while you can implement evaluation of classical logic on a machine, it's extremely unwieldy and inefficient, and allows for a level of non-rigor that proves to be a massive footgun.


Classical logic isn’t rejected in computer science. Computer science papers don’t generally care if their proofs are non-constructive, just like in mathematics.

This entire thread is making clear that constructivists want to speak on behalf of everyone, while in the real world the vast majority of mathematicians or logicians don't belong to their niche school of mathematics/philosophy.

Do you understand the irony in posting this on a comment chain ostensibly rejecting foundational objectivism?

Not everyone has to subscribe to your philosophy, weird I know.

They care quite a bit actually, they just call their constructive proofs "algorithms" or "decision procedures".
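As a sketch of that idea in Haskell: a constructive proof of "every integer greater than 1 has a prime factor" must actually produce the factor, and the procedure below is that proof in code (smallestPrimeFactor is an illustrative name, not a standard function):

```haskell
-- The smallest divisor of n greater than 1 is necessarily prime
-- (any smaller factor of it would also divide n), so computing it
-- constitutes a constructive existence proof.
smallestPrimeFactor :: Int -> Int
smallestPrimeFactor n = head [d | d <- [2 .. n], n `mod` d == 0]
```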

Intuitionism is just disallowing the law of the excluded middle (that every proposition is either true or not true). Disallowing non-constructive proofs belongs to a related system called "constructivism". There are rigorous formulations of mathematics that are constructive, intuitionist, or even strict finitist.
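In Lean 4 the split is directly visible (a sketch, assuming only the core library): excluded middle lives in the classical fragment as an axiom, while constructively only its double negation is provable:

```lean
-- Classically, excluded middle is available as an axiom:
example (p : Prop) : p ∨ ¬p := Classical.em p

-- Constructively, we can only prove its double negation:
example (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))
```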

What point are you responding to?

The parent of my post referred to disallowing non-constructive proofs, which is not a feature of intuitionist logic but of constructivism.

But proving the object exists is still useful, of course: it effectively means you can assume an oracle that constructs this object without hitting any contradiction. Talking about oracles is useful in turn since it's a very general way of talking about side-conditions that might make something easier to construct.

Of course. Though it's also important to note: whether an object exists depends on the logic being used. And even if the object has some structural equivalent in the logic of attention, not all provable structure may be shared between the two; and that's before we get into how the axioms chosen on top of the logical system mutate all of this.

It's not that classical logic is useless, it's just that it's not particularly appropriate to choose as the basis for a system built on algorithms. This goes both ways. Set theory was taken as the foundation of arithmetic, et al. because type theory was simply too unwieldy for human beings scrawling algebras on blackboards.


I am absolutely not even close to being an expert on the topic, but type theory wasn't all that well understood even relatively recently - Voevodsky coined the Univalence axiom in 2009 or so, while sets have been used for centuries.

So not sure it would be "unwieldy", it's a remarkably simple addition and it may avoid some of the pain points with sets? But again, not even a mathematician.


Set theory was chosen because it was a comparatively simple proof of concept. You don't really refer to the foundation when scrawling algebra on a blackboard the way you would with a proof assistant, and this actually causes all sorts of issues down the line (it's a key motivation for things like HoTT).

> But proving the object exists is still useful, of course: it effectively means you can assume an oracle that constructs this object without hitting any contradiction.

I don't think that logic holds in mainstream mathematics (it will hold in constructive mathematics by definition, and may hold in slightly more powerful philosophies of mathematics), because there we can prove the existence of many functions and numbers that aren't computable.


The whole point of oracles is to posit the ability to compute things that aren't (in the general case) computable.
