raesene9 2 days ago

Interesting article. There is always a risk that a new hot technique will get more attention than it ultimately warrants.

For me the key quote in the article is

"Most scientists aren’t trying to mislead anyone, but because they face strong incentives to present favorable results, there’s still a risk that you’ll be misled."

Understanding people's incentives is often very useful when you're looking at what they're saying.

  • ktallett 2 days ago

    There are those who have realised they can make a lot of cash from it and also get funding by using the term AI. But at the end of the day, what software doesn't have some machine learning built in? It's nothing new, nor are the current implementations particularly extraordinary or accurate.

    • asdff a day ago

      Plenty of software has zero ML. But either way, not all ML is the same. There are many different algorithms, each with their own tradeoffs. AI as it is presently marketed, however, usually means one type of AI, the large language model, which also has tradeoffs and is a bit new to the scene compared to, say, Markov chains, whose history starts in the early 1900s.
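
      For anyone who hasn't seen one up close, a Markov chain text generator fits in a dozen lines, which is part of why the contrast with LLMs matters (a toy sketch, standard library only, corpus made up):

          import random
          from collections import defaultdict

          # Next-word table built by pure counting over a (made-up) corpus
          corpus = "the cat sat on the mat and the dog sat on the rug".split()
          table = defaultdict(list)
          for a, b in zip(corpus, corpus[1:] + corpus[:1]):   # wrap around so every word has a successor
              table[a].append(b)

          word, out = "the", ["the"]
          for _ in range(8):
              word = random.choice(table[word])               # sample next word given only the current one
              out.append(word)
          print(" ".join(out))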

    • overfeed 17 hours ago

      AI is a fuzzy term, and a moving target. Expert Systems have zero ML and were considered cutting-edge AI once upon a time.

rhubarbtree 2 days ago

I think this is mostly just a repeat of the problems of academia - no longer truth-seeking, instead focused on citations and careerism. AI is just a.n.other topic where that is happening.

  • geremiiah 2 days ago

    I don't want to generalize because I do not know how widespread this pattern is, but my job has me hopping between a few HPC centers around Germany, and a pattern I notice is that a lot of these places are chock-full of reject physicists, and a lot of the AI funding that gets distributed gets gobbled up by these people; the consequence is a lot of these ML4Science projects. I personally think it is a bit of a shame, because HPC centers are not there only to serve physicists, and especially with AI funding we in Germany should be doing more AI-core research.

    • ktallett 2 days ago

      HPC centers are usually in collaboration with universities for specific science research. Using up their resources to hop on the bandwagon damages other research, all for an industry (AI) which is neither new nor anywhere close to being anything more than a personal assistant at the moment. Not even a great one at that.

    • shusaku a day ago

      > a pattern I notice is that a lot of these places are chock-full of reject physicists

      Utter nonsense, these are some of the smartest people in the world who do incredibly valuable science.

  • const_cast a day ago

    To be fair, the problem of careerism is really a side effect of academia becoming more enthralled with the private sector, and therefore inheriting its problems.

    If there's one thing working as a software dev has taught me, it's that all decisions are made from a careerist, selfish perspective. Nobody cares what's best, they care what's most impressive and what will personally get them ahead. After it's done, it's not their problem. And nobody can blame them either. This mindset is so pervasive that if you don't subscribe to it, you're a sucker. Because other people will, and they'll just out-lap you. So the result is the same except now you're worse off.

    • rhubarbtree 15 hours ago

      Well, the good news - and I think from the sound of your post you will take it as good news, because you care - is that you are not correct.

      Some careers are vocations, and in vocations people work less for egoist reasons and more from the desire to help people. Fortunately in the UK, we still have a very strong example of a vocation - nursing. I know many nurses, none of them can be described as careerist or selfish. So to begin, we know that your statement doesn’t hold true. Nurses’ pay is appalling and career paths are limited, so I’m confident that these many datapoints generalise.

      The obvious next question is why academia is not a vocation. You say it’s because it has become too like the private sector. Well, I can tell you that is also wrong, as I have spent many years in both sectors, and the private sector is much less selfish and careerist. This is surprising at first, but I think it’s about incentives.

      In the private sector very few people are in direct competition with each other, and it is rarely a zero sum game. The extreme of this is startups, where founders will go to great lengths to help each other. Probably the only area their interests are not aligned is in recruitment, but it is so rare for them to be recruiting the same type of person at exactly the same time that this isn’t really an issue. There are direct competitors of course, but that situation is so exceptional as to be easily ignored.

      In academia, however, the incentives encourage selfishness, competition, obstruction, and most of all vicious politics. Academics are not paid well, and mostly compete for egoist rewards such as professorships. I believe in the past this was always somewhat a problem, but it has been exacerbated by multiple factors: (a) very intelligent people mostly left, because more money could be made in finance and tech, and thus little progress can be made and there is no status resulting from genuine science, (b) governments have used research assessment exercises, nonsense bureaucracy invented by fools that encourages silly gaming of stats rather than doing real work, (c) a system of reinforcement where selfish egotists rise at the expense of real scientists, and then - consciously or not - reinforce the system they gamed, thinking it helped them up the ladder and thus must be a good system. The bad drive out the good.

      Ultimately the problem is that academia is now filled with politicians pretending to be scientists, and such institutional failure is, I think, a one-way street. The only way to fix it is to create new institutions and protect them from infiltration by today’s “scientists”.

      This is of course a generalisation, and there are some good eggs left, just not many. Most of them eventually realise they’re surrounded by egoist politicians and eventually leave.

    • ethbr1 21 hours ago

      The follow on from this is that any structure one wants to persist through time had better rest maximally on people acting in their own self interest.

      • frickinLasers 5 hours ago

        This doesn't seem possible, because self-interest will always lead to hacking the structure for better returns, and technology accelerates the ability to do that. It seems to me that whatever is put in place to direct selfish behavior toward good will eventually be rerouted or broken by one exceptionally selfish asshole or group.

        • ethbr1 4 hours ago

          It's not a binary classification.

          Some structures are more resistant, some less.

          Some are self-correcting, some not.

          The biggest design feature is usually requiring energy to be burnt to hack the desired outcome. At some point it's more effort than benefit.

  • barrenko 2 days ago

    Seriously don't understand what "no longer" does here.

angry_moose a day ago

I've been "lucky" enough to get to trial some AI FEM-like structural solvers.

At best, they're sort of OK for linear, small-deformation problems. The kind of models where we could get an exact solution in ~5 minutes vs a fairly sloppy solution in ~30 seconds. Start throwing anything non-linear in and they just fall apart.

Maybe enough to do some very high-level concept selection but even that isn't great. I'm reasonably convinced some of them are just "curvature detectors" - make anything straight blue, anything with high curvature red, and interpolate everything else.
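
To make the hypothesis concrete, this is roughly the kind of geometry-only heuristic I'm suspicious of (a toy sketch with NumPy, purely illustrative, nothing from the actual tools):

    import numpy as np

    def pseudo_stress(points):
        """points: (n, 2) node coordinates along a 2D boundary; returns a fake 'stress' in [0, 1]."""
        d1 = np.gradient(points, axis=0)   # first derivative along the curve
        d2 = np.gradient(d1, axis=0)       # second derivative
        # Discrete curvature: |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
        num = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
        den = (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5 + 1e-12
        kappa = num / den
        # "Straight = blue, curved = red": normalize curvature and call it stress
        return kappa / (kappa.max() + 1e-12)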

  • amelius a day ago

    Could you use these models as a preconditioner in an iterative solver?

    • angry_moose 5 hours ago

      I don't see any reason it's not theoretically possible, but I doubt it would be that beneficial.

      You'd have to map the results back onto the traditional model, which has overhead; and using shaky results as a preconditioner is going to negate a lot of the benefits, especially if it's (incorrectly) predicting the part is already in the non-linear stress range, which I've seen before. Force balances are all over the place as well (if they even bother to predict them at all, which isn't always clear), so it could even be starting from a very unstable point.

      It's relatively trivial to just use the native solution from a linear run as the starting point instead, which is basically what is done anyway with auto time stepping.
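
      For what it's worth, the mechanics of a warm start are trivial either way (a sketch with SciPy's conjugate gradient on a toy 1D Laplacian; the "prediction" here is just a stand-in):

          import numpy as np
          from scipy.sparse import diags
          from scipy.sparse.linalg import cg

          n = 1000
          A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")  # 1D Laplacian, stiffness-like matrix
          b = np.ones(n)

          x_guess = np.zeros(n)             # pretend this came from the surrogate (or a linear run)
          x, info = cg(A, b, x0=x_guess)    # iterative solve starting from the guess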

  • xeonmc a day ago

    So it's more like a "second principles" solver: it cannot synthesize anything that it hasn't already seen before.

jxjnskkzxxhx 6 hours ago

> AI adoption is exploding among scientists less because it benefits science and more because it benefits the scientists themselves.

This is true in so many aspects of human life - anyone trying to run an organisation should be aware of it.

nicoco 2 days ago

I am not an AI booster at all, but the fact that negative results are not published and that everyone is overselling their stuff in research papers is unfortunately not limited to AI. This is just a consequence of the way scientists are evaluated and of the scientific publishing industry, which basically suffers from the same shit as traditional media does (craving an audience).

Anyway, winter is coming, innit?

  • moravak1984 2 days ago

    Sure, it's not. But often on AI papers one sees remarks that actually mean: "...and if you throw in one zillion GPUs and make them run until the end of time you get {magic_benchmark}". Or "if you evaluate this very smart algo on our super-secret, real-life dataset that we claim is available on request, but we'd ghost you if you dare to ask, then you will see this chart that shows how smart we are".

    Sure, it is often flag-planting, but when these papers come from big corps, you cannot "just ignore them and keep on" even when there are obvious flaws/issues.

    It's a race over resources. As a (former) researcher at a low-budget university, we just cannot compete. We are coerced into believing whatever figure is passed off in the literature as a "benchmark", with no possibility of replication.

    • aleph_minus_one a day ago

      > It's a race over resources. As a (former) researcher at a low-budget university, we just cannot compete. We are coerced into believing whatever figure is passed off in the literature as a "benchmark", with no possibility of replication.

      The central purpose of university research has basically always been for researchers to work on hard, foundational topics that are so long-term that industry is hardly willing to take them on. On the other hand, these topics are very important; that is why the respective country is willing to finance this foundational research.

      Thus, if you are at a university, once your research topic becomes an arms race with industry, you simply work either at the wrong place (university instead of industry) or on a "wrong" topic in the respective research area (look for some much more long-term, experimental topics that, if you are right, might change the whole research area in, say, 15 years, instead of resource-intensive, minor improvements to existing models).

    • nicoco 2 days ago

      I agree with that. Classically used "AI benchmarks" need to be questioned. In my field, these guys have dropped a bomb, and no one seems to care: https://hal.science/hal-04715638/document

      • baxtr 2 days ago

        Can you give a brief summary, for an outsider to the field, of why this paper is a breakthrough?

        • mzl 2 days ago

          Having a quick look at it (I hadn't seen the paper before), this seems to be a very good analysis of how results are reported, specifically for medical imaging benchmarks.

          As is often the case with statistics, selecting just a single number to report (whatever that number is) will hide a lot of different behaviours. Here, they show that just using the mean is a bad way to report data, as the confidence intervals (reconstructed in most cases by the methods in the paper) show that the models can't really be distinguished based on their means.
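
          A toy sketch of that point (NumPy only, numbers invented, not the paper's code): per-case scores with a bootstrap 95% CI on the mean, instead of the mean alone.

              import numpy as np

              rng = np.random.default_rng(0)
              scores_a = rng.normal(0.85, 0.05, size=50)   # per-case Dice scores, model A (invented)
              scores_b = rng.normal(0.86, 0.05, size=50)   # per-case Dice scores, model B (invented)

              def bootstrap_ci(x, n_boot=10_000, alpha=0.05):
                  means = np.array([rng.choice(x, size=len(x), replace=True).mean()
                                    for _ in range(n_boot)])
                  return np.quantile(means, [alpha / 2, 1 - alpha / 2])

              print(scores_a.mean(), bootstrap_ci(scores_a))   # the two CIs overlap heavily,
              print(scores_b.mean(), bootstrap_ci(scores_b))   # so the means alone can't separate the models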

          • amarcheschi 2 days ago

            Hell, I was asked to use confidence intervals as well as average values for my BSc thesis when doing ML benchmarks, and scientists publishing results in medical fields aren't doing it...

            How can something like that happen? I mean, I had a supervisor tell me "add the confidence intervals to the results as well" and explain why. I guess nobody ever told them? Or they didn't care? Or it's just an honest mistake.

            • stogot a day ago

              Is it because it's word-of-mouth and not written down in some NSF (or other organization) guidance? This seems to be the issue.

              • amarcheschi a day ago

                That might be, but couldn't a paper be required to include that in order to be published? It seems like important information.

        • nicoco a day ago

          I don't think it qualifies as a breakthrough. In short:

          1. Segmentation is a very classical task in medical image processing.

          2. Every day there are papers claiming that they beat the state of the art.

          3. This paper says that most of the time, the state of the art has not actually been beaten, because the claimed improvements are within the margin of error.

  • asoneth a day ago

    I published my first papers a little over fifteen years ago on practical applications for AI before switching domains. Recently I've been sucked back in.

    I agree it's a problem across all of science, but AI seems to attract more than its fair share of researchers seeking fame and fortune. Exaggerated claims and cherry-picking of data seem even more extreme in my limited experience, and even responsible researchers end up exaggerating a bit to try and compete.

  • KurSix a day ago

    AI just happens to be the current hype magnet, so the cracks show more clearly

  • croes 2 days ago

    But AI makes it easier to write convincing-looking papers.

-__---____-ZXyw a day ago

Did the title get changed, or have I started hallucinating?

Title is:

"I got fooled by AI-for-science hype—here's what it taught me"

  • kjhughes a day ago

    It got changed (for the worse, in my opinion) away from the original title.

    The original title is supposed to be favored here unless it has a serious problem.

    This original title had no serious problem, unless accurately summarizing a PhD candidate's thoughtful critique of some questionable AI contributions to scientific research is a serious problem.

    • tanderson92 21 hours ago

      The present title is more friendly to VCs and the tech industry, shocking no one.

Flamentono2 2 days ago

I'm not sure why people on HN (of all places) are so divided regarding the perception of AI/ML.

I have not seen anything like it before. We literally had no system or way of doing things like code generation from text input.

Just last week I asked for a script to do image segmentation with a basic UI, and Claude just generated it for me in under a minute.
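
For scale, something in that ballpark is only a handful of lines (a rough sketch with Pillow/NumPy/matplotlib and a crude global threshold, not what Claude actually produced):

    import numpy as np
    import matplotlib.pyplot as plt
    from PIL import Image

    img = np.array(Image.open("input.png").convert("L"), dtype=float)  # grayscale image
    mask = img > img.mean()                                            # crude "segmentation" by thresholding

    fig, (ax1, ax2) = plt.subplots(1, 2)
    ax1.imshow(img, cmap="gray"); ax1.set_title("input")
    ax2.imshow(mask, cmap="gray"); ax2.set_title("mask")
    plt.show()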

I could list tons of examples which are groundbreaking. The whole image generation stack is completely new.

That blog article is fair enough, and there is hype around this topic for sure, but even just for researchers who need to write code for their research, AI can already make them a lot more efficient.

But I do believe that we have entered a new era: an era where we take data very seriously again. A few years back, you said 'the internet doesn't forget'; then we realized that, yes, the internet does start to forget. Google deleted pages and removed the cache feature, and it felt like we stopped caring about data because we didn't know what to do with it.

Then AI came along. Not only is data king again, but we are now in the midst of the reinforcement era: we give feedback and the systems incorporate that feedback into their training/learning.

And AI/ML is being worked on in every single aspect: hardware, algorithms, use cases, data, tools, protocols, etc. We are in the middle of incorporating it and building for and on it. This takes a little bit of time. Still, the pace of progress is crazily exhausting.

We will only see in a few years whether there is a real ceiling. We do need more GPUs and bigger datacenters to do a lot more experiments on AI architectures and algorithms. We have a clear bottleneck: big companies train one big model for weeks and months.

  • whyowhy3484939 a day ago

    > Just last week I asked for a script to do image segmentation with a basic UI, and Claude just generated it for me in under a minute.

    Thing is, we just see that it's copy-pasting Stack Overflow, but now in a fancy way, so this sounds like "I asked Google for a nearby restaurant and it found it in like 500ms; my C64 couldn't do that". It sounds impressive (and it is) because it sounds like "it learned about navigating in the real world and it can now solve everything related to that", but what it actually solved is "fancy lookup in a GIS database". It's useful, damn sure it is, but once the novelty wears off you start seeing it for what it is instead of what you imagine it is.

    Edit: to drive the point home.

    > Claude just generated it

    What you think happened is that the AI is "thinking", building an ontology over which it reasons, and coming to the logical conclusion that this script was the right output. What actually happened is that your input correlates with this output according to the trillion examples it saw. There is no ontology. There is no reasoning. There is nothing. Of course this is still impressive and useful as hell, but the novelty will wear off in time. The limitations are obvious by this point.

    • Flamentono2 a day ago

      I've been following LLMs and AI/ML for a few years now, and not just at a high level.

      There is not a single system out there today which can do what Claude can do.

      I still see it for what it is: a technology I can use and communicate with in natural language to get a very diverse set of tasks done, from writing/generating code, to SVGs, to emails, translation, etc.

      It's a paradigm shift for the whole world, literally.

      We finally have a system which encodes not just basic things but high-level concepts. And often enough, we humans are doing something very similar.

      And what limitations are obvious? Tell me. We have not reached any real ceiling yet. We are limited by GPU capacity and by how many architectural experiments a researcher can run. We have plenty of work to do to clean up the datasets we use and have. We need to build more infrastructure, better software support, etc.

      We have not even reached the phase where we all have local AI/ML chips built in.

      We don't even know yet how a system will act once every one of us has access to very fast inference like you already get with Groq.

      • nancyminusone a day ago

        LLMs are great at tasks that involve written language. If your task does not involve written language, they suck. That's the main limitation. No matter how hard you push, AI is not a 'do everything machine' which is how it's being hyped.

        • ranie93 a day ago

          Can "everything" be mapped to a written language task (i.e. described)?

      • lossolo a day ago

        > It's a paradigm shift for the whole world, literally.

        That's hyperbolic. I use LLMs daily. They speed up tasks you'd normally use Google for and can extrapolate existing code into other languages. They boost productivity for professionals, but it's not like the discovery of the steam engine or electricity.

        > And what limitations are obvious? Tell me? We have not reached any real ceiling yet.

        Scaling parameters is the most obvious limitation of the current LLM architecture (transformers). That's why what should have been called GPT-5 is instead named GPT-4.5: it isn't significantly better than the previous model despite having far more parameters, much cleaner training data, and optimizations.

        The low-hanging fruit has already been picked, and most obvious optimizations have been implemented. As a result, almost all leading LLM companies are now operating at a similar level. There hasn't been a real breakthrough in over two years. And the last huge architectural breakthrough was in 2017 (with the paper "Attention Is All You Need").

        Scaling at this point yields only diminishing returns. So no, what you’re saying isn’t accurate, the ceiling is clearly visible now.

        • sanderjd a day ago

          I honestly think it's still way too early to say this either way. If your hypothesis that there are no breakthroughs left is right, then it's still a very big deal, but I'd agree with you that it's not steam engine level.

          But I don't think "the transformer paper was eight years ago" is strong evidence for that argument at all. First of all, the incremental improvement and commercialization and scaling that has happened in that period of time is already incredibly fast. Faraday had most of the pieces in place for electricity in the 1830s and it took half a century to scale it, including periods where the state of the art began to stagnate before hitting a new breakthrough.

          I see no reason to believe it's impossible that we'll see further step-change progressions in AI. Indeed, "Attention is All You Need" itself makes me think it's more likely than not. Out of the infinite space of things to try, they found a fairly simple tweak to apply to existing techniques, and it happened to work extremely well. Certainly a lot more of the solution space has been explored now, but there's still a huge space of things that haven't been tried yet.

      • whyowhy3484939 a day ago

        > We finally have a system which encodes not just basic things but high-level concepts

        That's the thing I'm trying to convey: it's in fact not encoding anything you'll recognize and if it is, it's certainly not "concepts" as you understand them. Not saying it cannot correlate text that includes what you call "high level concepts" or do what you imagine to be useful work in that general direction. Again not making claims it's not useful, just saying that it becomes kind of meh once you factor in all costs and not just the hypothetical imaginary future productivity gains. AKA building literal nuclear reactors to do something that basically amounts to filling in React templates or whatever BS needs doing.

        If it were reasoning, it could start with a small set of bootstrap data and infer/deduce the rest from experience. It cannot. We are not even close, as in there is not even a theory to get us there, forget about the engineering. It's not a subtle issue. We need to throw literally all the data we have at it to get it to acceptable levels. At some point you have to retrace some steps and think over some decisions, but I guess I'm a skeptic.

        In short it's a correlation engine which, again, is very useful and will go ways to improve our lives somewhat - I hope - but I'm not holding my breath for anything more. A lot of correlation does not causation make. No reasoning can take place until you establish ontology, causality and the whole shebang.

        • Flamentono2 a day ago

          I do understand it, but I also think that the current LLMs are the first step toward it.

          GPT-3 kicked off proper investment into this topic; there was not enough research being done in this direction before, and now there is. People like Yann LeCun are already analysing different approaches/architectures, but they still use the infrastructure of LLMs (ML/GPUs) and potentially the data.

          I never said that LLMs are the breakthrough to consciousness.

          But you can also ask an LLM for strategies for thinking. It can tell you a lot of things. We will see whether an LLM will be a fundamental part of AGI or not, but GPUs/ML probably will be.

          I also think that the compression mechanism of LLMs leads to concepts through optimization. You can see from the Anthropic paper that an LLM doesn't work in normal language space but in a high-dimensional one, and then 'expresses' the output in a language you like.

          We also see that real multimodal models are better at a lot of tasks due to the much richer context available to them, e.g. estimating what someone said from context.

          The necessary infrastructure and power requirements are something I accept too. We can assume, and I do, that further progress on a lot of topics will require this type of compute, and it also addresses our data bottleneck: normal CPU architectures are limited by the memory bus.

          Also, if the richest companies in the world invest in nuclear, I think that is a lot better than other companies doing it. They have much higher margins and more knowledge. CO2 is a market differentiator for them too.

          I also expect this amount of compute to be the basis for fixing real issues we all face, like cancer, or for optimizing the detection of cancer or any other sickness. We need to make medicine a lot cheaper, and if someone in Africa can do a cheap X-ray and send it to the cloud to get feedback, that could help a lot of people.

          Doing complex and massive protein analysis or mRNA research in virtual space also requires GPUs.

          All of this happened in a timespan of only a few years. I have not seen anything progressing as fast as AI/ML currently does, and as unfortunate as it is, this needs compute.

          Even my small in-house image recognition fine-tuning explodes when you do a handful of parameter optimizations, but the quality is a lot better than what we had before.

          And enabling people to have a real natural-language UI is HUGE. It makes so much more accessible, and not just for people with a disability.

          Things like 'do an ELI5 on topic X', 'explain this concept to me', etc. I would have loved that when I was trying to get through the university math curriculum.

          All of that is already crazy. But in parallel, what Nvidia and others are currently doing with ML and robotics is also something that requires all of that compute. And the progress there is again breathtaking. The current flood of basic robots standing and walking around is due to ML.

          • th0ma5 a day ago

            I mean, you're not even wrong! Almost all of these large models are based on the idea that if you put all of the representations of the world that we can into a big pile, you can tease out some kind of meaning. There's not even really a cohesive theory as to that, and surely no testable way to prove that it's true. It certainly seems like you can make a system that behaves as if it could be like that, and I think that's what you're picking up on. But it's actually probably something else, something that falls far short of that.

            • hnaccount_rng 4 hours ago

              There is an interesting analogy that my Analysis I professor once made: the intersection of all valid examples is also a definition of an object. In many ways this is, at least in my current understanding, how ML systems "think". So yeah, it will take some superposition of examples and kind of try to interpolate between those. But fundamentally it is - at least so far - always an interpolation, not an extrapolation.

              Whether we consider that "just regurgitating Stack Overflow" or "it thought up the solution to my problem" mostly comes down to semantics.
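
              A toy illustration of that interpolation/extrapolation point (NumPy only, numbers made up): fit a flexible model inside a range and evaluate it outside.

                  import numpy as np

                  rng = np.random.default_rng(0)
                  x_train = rng.uniform(-1, 1, 200)
                  y_train = np.sin(3 * x_train) + rng.normal(0, 0.05, 200)

                  coeffs = np.polyfit(x_train, y_train, deg=9)   # flexible fitted model

                  x_in, x_out = np.linspace(-1, 1, 100), np.linspace(2, 3, 100)
                  err_in = np.abs(np.polyval(coeffs, x_in) - np.sin(3 * x_in)).mean()
                  err_out = np.abs(np.polyval(coeffs, x_out) - np.sin(3 * x_out)).mean()
                  print(err_in, err_out)   # tiny error inside the training range, huge outside it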

      • dvfjsdhgfv a day ago

        > There is not a single system out there today which can do what Claude can do.

        Of course there is; it's called Gemini 2.5 Pro, and it is also the reason I cancelled my Claude (and earlier OpenAI) subscriptions (I had quite a few of them to get around limits).

    • skydhash a day ago

      Yeah. It's just fancier techniques than linear regression. Just like the latter takes a set of numbers and produces another set, LLMs take words and produce another set of words.

      The actual techniques are the breakthrough. The results are fun to play with and may be useful on some occasions, but we don't have to put them on a pedestal.

    • holoduke a day ago

      You have the wrong idea of how an LLM works. It's more like a model that iteratively finds associated/relevant blocks. The reasoning is the iterative steps it takes.

  • callc a day ago

    > “I'm not sure why people on HN (of all places) are so divided regarding the perception of AI/ML.”

    Everyone is a rational actor from their individual perspective. The people hyping AI and the people dismissing the hype both have good reasons.

    There is justification for seeing this new tech as groundbreaking. There is justification for being wary about massive theft of data and dismissiveness of privacy.

    First, acknowledge and respect that there are so many opinions on any issue. Take yourself out of the equation for a minute. Understand the other side. Really understand it.

    Take a long walk in other people’s shoes.

  • KurSix a day ago

    But on the flip side, the "AI will revolutionize science" narrative feels way ahead of what the evidence supports

  • sanderjd a day ago

    HN is always divided on "how much is the currently hype-y technology real vs just hype".

    I've seen this over and over again and been on different sides of the question on different technologies at different times.

    To me, this is same as it ever was!

    • aleph_minus_one a day ago

      I basically agree, but want to point out two major differences from other "hype-y" topics of the past that, in my opinion, make the AI discussions on HN a little more controversial than older hype discussions:

      1. The whole investment volume (and thus hope and expectations) in AI is much larger than in other hype topics.

      2. Sam Altman, the CEO of OpenAI, was president of Y Combinator, the company behind Hacker News, from 2014 to 2019.

      • sanderjd a day ago

        On (1): Investment volume relative to what? To me, it looks like a very similar pattern of investors crowding into the currently hot thing, trying to get a piece of the winners of the power law.

        On (2): I'm honestly not sure I think this is making a big difference at all. Not much of the commentary here is driven by YC stuff, because most of the audience here has no direct entwinement with YC.

        • og_kalu a day ago

          >On (1): Investment volume relative to what? To me, it looks like a very similar pattern of investors crowding into the currently hot thing, trying to get a piece of the winners of the power law.

          The profile of investors (nearly all the biggest tech companies amongst others) as well as how much they're willing to and have put down (billions) is larger than most.

          OpenAI alone just started work on a $100B+ datacenter (Stargate).

          • sanderjd a day ago

            Yeah maybe I buy it. But it reminds me of the investment in building out the infrastructure of the internet. That predates HN, but it's the kind of thing we would have debated here if we could have :)

  • Workaccount2 a day ago

    The ultimate job of a programmer is to translate human language into computer language. Computers are extremely capable, but they speak a very cryptic, overtly logical language.

    LLMs are undeniably treading onto that territory. Who knows how far in they will make it, but the wall is breached. Which is unsettling to downright scary, depending on your take. It is a real threat to a skill that many have honed for years and which is very lucrative to have. Programmers don't even need to be replaced; having to settle for $100k/yr in a senior role is almost just as scary.

    • kbelder a day ago

      Yes, but the scale isn't 'unsettling' to 'scary'... it's from 'incredible' to 'scary'.

  • Retr0id a day ago

    Google never gave a good reason for why they stopped making their cache public, but my theory is that it was because people were scraping it to train their LLMs.

  • corytheboyd a day ago

    > Just last week I asked for a script to do image segmentation with a basic UI, and Claude just generated it for me in under a minute.

    I agree that this is useful! It will even take natural language and augment the script, and maybe get it right! Nice!

    The AI is combing through scraped data with an LLM and conjuring up some ImageMagick snippets into a shell script. This is very useful, and if you're like most people, who don't know ImageMagick intimately, it's going to save you tons of time.

    Where it gets incredibly frustrating is tech leadership seeing these trivial examples, and assuming it extrapolates to general software engineering at their companies. “Oh it writes code, or makes our engineers faster, or whatever. Get the managers mandating this, now! Also, we need to get started on the layoffs. Have them stack rank their reports by who uses AI the best, so that we are ready to pull the trigger.”

    But every real engineer who uses these tools on real (as in huge, poorly written) codebases, if they are being honest (they may not be, given the stack ranking), will tell you “on a good day it multiplies my productivity by, let’s say, 1.1-2x? On a bad day, I end up scrapping 10k lines of LLM code, reading some documentation on my own, and solving the problem with 5 lines of intentional code.”

    Please, PLEASE pay attention to the detail that I added: huge, poorly written codebases. This is just the reality at most software companies that have graduated from a series A startup. What my colleagues and I are trying to tell you, leadership, is that these "it made a script" and "it made an HTML form with a backend" examples ARE NOT cleanly extrapolating to the flaming dumpster-fire codebases we actually work with. Sometimes the tools help! Sometimes, they don't.

    It’s as if LLM is just another tool we use sometimes.

    This is why I am annoyed. It’s incredibly frustrating to be told by your boss “use tool or get fired” when that tool doesn’t always fit the task at hand. It DOES NOT mean I see zero value in LLMs.

  • evilfred a day ago

    Most work in software jobs is not making one-off scripts like in your example. A lot of the job is about modifying existing codebases, which include in-house approaches to style and services along with various third-party frameworks like annotation-driven Spring, plus requirements around how to write tests and how many. AI is just not very helpful here; you spend more time spinning your wheels trying to craft the absolute perfect script than just making the code changes directly.

  • Barrin92 a day ago

    > but even just for researchers who need to write code for their research, AI can already make them a lot more efficient.

    Scientists don't need to be efficient, they need to be correct. Software bugs were already a huge cause of scientific error and responsible for a lack of reproducibility; see for example cases like this (https://www.vice.com/en/article/a-code-glitch-may-have-cause...)

    Programming in research environments is done with some notoriously questionable variation in quality, as is the case in industry to be fair, but in research minor errors can ruin the results of entire studies. People are fed up and come to much harsher judgements on AI because in an environment like a lab you cannot write software with the attitude of an impressionist painter or the AI equivalent; you need to actually know what you're typing.

    AI can make you more efficient if you don't care if you're right, which is maybe cool if you're generating images for your summer beach volleyball event, but it's a disastrous idea if you're writing code in a scientific environment.

    • Flamentono2 a day ago

      I do expect a researcher to verify the way the code interacts with the data set.

      Still, a lot of researchers can benefit from coding tools in their daily work to make them a lot faster.

      And plenty of strategies exist to safeguard this: tool use, for example, unit tests, etc.

  • dvfjsdhgfv a day ago

    There is no single reason. Nobody will dispute that LLMs are already quite useful at some tasks if used properly.

    As for the opposing view, there are so many reasons.

    * Founders and other people who bet their money on AI try to pump up the hype in spite of problems with delivery

    * We know some of them are plainly lying, but the general public doesn't

    * They repeat their assumptions as facts ("AI will replace most X and Y jobs by year Z")

    * We clearly see that the enormous development of LLMs has plateaued, but they try to convince the general public of the contrary

    * We see the difference in how a single individual (Aaron Swartz) was treated for a small copyright infringement, while the consequences for AI companies like OpenAI or Meta, who copied the whole contents of Libgen, are non-existent.

    * Some people like me just hate AI slop - in writing and imaging. It just puts me off and I stop reading/watching etc.

    There are many more points like this.

ausbah a day ago

> After a few weeks of failure, I messaged a friend at a different university, who told me that he too had tried using PINNs, but hadn’t been able to get good results.

Not really related to AI, but this reflects a lesson I learned too late during some research in college: constant collaboration is important because it helps you avoid retreading areas where others have already failed.

  • mmarian a day ago

    Or the need for researchers to publish their failed experiments?

  • thearn4 a day ago

    Another reason why the idea of AI agents for science hasn't made much sense to me. Research is an extremely collaborative set of activities. How good would a researcher be who is very good at literature review, but never actually talks to anyone, goes to any conferences, etc?

omneity 2 days ago

The article initially appears to suggest that all AI in science (or at least the author's field) is hype. But their gripe seems specific to an overhyped architecture called the PINN (physics-informed neural network), as they mention in the end how they ended up using other DL models to successfully compute PDEs faster than traditional numerical methods.
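
For readers who haven't met the acronym: the core PINN trick is to put the differential equation itself into the training loss. A minimal sketch (assuming PyTorch, on a toy ODE u' = -u with u(0) = 1, not anything from the article):

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        t = torch.rand(64, 1, requires_grad=True)           # random collocation points in [0, 1]
        u = net(t)
        du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
        residual = du + u                                    # enforce the ODE u' + u = 0
        bc = net(torch.zeros(1, 1)) - 1.0                    # enforce the boundary condition u(0) = 1
        loss = residual.pow(2).mean() + bc.pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()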

  • geremiiah 2 days ago

    It's more widespread than PINNs. PINNs have been widely known to be rubbish for a long time. But the general failure of using ML for physics problems goes well beyond them.

    Where ML generally shines is either when you have a relatively large amount of experimental data for a fairly narrow domain. This is the case for machine-learned interatomic potentials (MLIPs), which have been a thing since the '90s. It is also potentially the case for weather modelling (but I do not want to comment on that). Or when you have absolutely insane amounts of data and you train a really huge model. This is what we refer to as AI. This is basically why AlphaFold is successful, and AlphaFold still fails to produce good results when you query it on inputs that are far from any data points in its training data.

    But most ML-for-physics problems tend to be somewhere in between: lacking experimental data, working with too little simulation data because it is so expensive to produce, training models that are not large enough (because inference would be too slow anyway if they were bigger), and then expecting these models to learn a very wide range of physics.

    And then everyone jumps on the hype train, because it is so easy to give it a shot. And everyone gets the same dud results. But then they publish anyway. And if the lab/PI is famous enough, or if they formulate the problem in a way that is unique and looks sciency or mathy, they might even get their paper into a good journal/conference and get lots of citations. But in the end, they still only end up with the same results as everyone else: it replicates the training data to some extent, and somebody else should work on the generalizability problem.

  • hyttioaoa 2 days ago

    He published a whole paper providing a systematic analysis of a wide range of models. There's a whole section on that. So it's not specific to PINNs.

    • BlueTemplar 8 hours ago

      The use of the term «AI» is, yet again, annoying in its vagueness.

      I'm assuming that they do not refer to the general use of machines to solve differential equations (whether exactly or approximately), which is centuries old (Babbage's engine).

      But then how restricted are these «Physics-Informed Neural Networks»? Are there other methods using neural networks to solve differential equations?

  • nottorp 2 days ago

    Replace PINN with any "AI" solution for anything and you'll still find it overhyped.

    The only realistic evaluations of "AI" so far are those that admit it's only useful for experts to skip some boring work. And triple check the output after.

sublimefire a day ago

Great analysis and spot-on examples. Another issue with AI-related research is that a lot of papers are new and not that many get published in "proper" places, yet they are quoted left, right, and center; just look at Google Scholar. It is hard to reproduce the results and check the validity of some statements, not to mention that research done 4 years ago used one set of models, while now another set of models with different training data is used in tests. It is hard to establish what really affects the results, and whether the conclusions hinge on some specific property of the outdated model or are even generalisable.

  • skydhash a day ago

    I’m not a scientist or a researcher, but anything based on statistics and data interpretation is immediately subject to my skepticism.

    • sn9 a day ago

      This is silly.

      There are practices like pre-registration, open data, etc. that can make results much more transparent and replicable.

shalmanese 2 days ago

This is less an article about AI and more about one of the less talked-about functions of a PhD program: becoming literate at "reading" academic claims beyond their face value.

None of the claims made in the article are surprising, because they're the natural outgrowth of the hodgepodge of incentives we've accreted over time into what we call "science". You just need practice to be able to place the output of science in the proper context and to understand that a "paper" is an artifact of a sociotechnical system, with all the complexity that entails.

pawanjswal 2 days ago

Appreciate the honesty. AI isn’t magic, and it’s refreshing to see someone actually say it out loud.

indoordin0saur a day ago

I saw the name of the blog owner (A "Timothy B. Lee") and was surprised to see that the ~70 year old inventor of HTTP and the web had such an active and cutting-edge blog.

plasticeagle 2 days ago

Does anybody else find it peculiar that the majority of these articles about AI say things like "of course I don't doubt that AI will lead to major discoveries", and then go on to explain how it isn't useful in any field whatsoever?

Where are the AI-driven breakthroughs? Or even the AI-driven incremental improvements? Do they exist anywhere? Or are we just using AI to remix existing general knowledge, while making no progress of any sort in any field using it?

  • strogonoff 2 days ago

    There is rarely a constructive discussion around the term "AI". You can't say anything useful about what it might lead to or how useful it might be, because it is purely a marketing term that does not have a specific meaning (and neither do the two words in its abbreviation).

    Interesting discussions tend to avoid “AI” in favour of specific terms such as “ML”, “LLM”, “GAN”, “stable diffusion”, “chatbot”, “image generation”. These terms refer to specific tech and applications of that tech, and allow to argue about specific consequences for sciences or society (use of ML in biotech vs. proliferation of chatbots).

    However, certain sub-industries prefer "AI" precisely because it's so vague, offers seemingly unlimited potential (please give us more investment money/stonks go up), and creates a certain vibe of a conscious being, which is useful when pretending not to be working around IP laws and creating tools based on data obtained without relevant licensing agreements (cf. the countless "if humans have the freedom to read, it's therefore unfair to restrict the uses of a software tool" fallacies, often perpetuated even by seemingly technically literate people, in pretty much every relevant forum thread).

    • rickdeckard a day ago

      It's not even that certain sub-industries prefer "AI"; it's the umbrella term a company can use in marketing for virtually any automated process that produces a seemingly subjective result.

      Case in point:

      For a decade, camera implementations went through development, testing, and tuning of Auto Exposure, Auto Focus, and Auto White-Balance ("AAA") engines, as well as image post-processing.

      These engines ran on an Image Signal Processor (ISP), or sometimes on the camera sensor itself; extensive work was done by engineering teams on building these models and optimizing them to run at low latency on an ISP.

      Suddenly AI came along and all of these features became "AI features". One company started with "AI-assisted camera" to promote the process everyone had been doing all along, so everyone else had to introduce AI, without any disruptive change in the process.
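
      For a sense of how un-"AI" this is, a classic auto-exposure step is just a feedback controller (a toy sketch with NumPy, made-up constants, nothing from a real ISP):

          import numpy as np

          def auto_exposure_step(frame, exposure, target=0.5, gain=0.8):
              """frame: 2D array of luminances in [0, 1]; returns the adjusted exposure."""
              error = target - frame.mean()   # distance from mid-grey
              return float(np.clip(exposure * (1 + gain * error), 1e-4, 1.0))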

      • Aldipower a day ago

        I remember something similar when the term "cloud" came up. It is still someone else's server or datacenter, with tooling.

        • johnisgood a day ago

          Yeah, can they just stop coining terms to refer to old, pre-existing things? I still hate the term "cloud".

          • idontwantthis a day ago

            My favorite description is “The cloud is a computer you don’t own in Reston, VA”

      • fluidcruft a day ago

        I agree it's completely meaningless. At this point I think marketing would label a toilet fill valve as "AI".

        • sebastiennight a day ago

          Well the "smart toilet" is definitely a thing you can buy today:

          > The integration of Artificial Intelligence (AI) and the Internet of Things (IoT) in bathroom fixtures, particularly toilets, is shaping the future of hygiene, convenience, and sustainability.

          • elcritch a day ago

            While automated AI measurement of the chemical makeup of... human effluent could be helpful for tracking health trends, I fear it'd also come with built-in integrations for Instagram and TikTok.

            • sebastiennight 6 hours ago

              Good news! The integration will be used to customize your feed by recommending foods and medicine that you might enjoy.

              The future is allowing advertisers to bid on specific proteins and triglyceride chains detected by the smart toilet.

            • GuinansEyebrows a day ago

              or (and i can actually see this happening) an Amazon integration to reorder bathroom tissue and bowl cleaner

      • giantrobot a day ago

        > One company started with "AI assisted Camera" to promote the process everyone was doing all-along.

        Before the "AI" labeling, more advanced image processing was often called "computational photography", at least in the world of smartphone cameras. Because they have tiny image sensors and lenses, smartphone cameras need to do a lot of work to get a decent image out of any environment that doesn't have perfect lighting. The processing is more traditional computer vision.

        There are now legitimate generative AI features being peddled, like editing people out of (or into) photos. But most of the image processing pipelines haven't fundamentally changed; they just now have AI labeling to please marketers and upper management.

    • roenxi a day ago

      Also, the strong predictions about AI use a vague term because the tech often doesn't exist yet. There isn't a chatbot right now that I feel confident can outperform me at systems design, but I'm pretty certain something that can is coming. Odds are also good that in 2-4 years there will be a new hotness to replace LLMs that is much more functional (maybe MLLMs, maybe called something else). We can start to predict and respond to their potential even though they don't exist yet; it just takes a little extrapolating. But it doesn't have a name yet.

      Which is to agree: obviously, if people are talking about "AI" they don't want to talk about something that exists right this second. If they did, it'd be better to use a precise word.

      • Closi a day ago

        Totally agree.

        Also the term 'LLM' is more about the mechanics of the thing than what the user gets. LLM is the technology, but some sort of automated artificial intelligence is what people are generally buying.

        As an example, when people use ChatGPT and get an image back, most don't think "oh, so the LLM called out to a diffusion API?" - they just think "oh chat GPT can give me an image if I give it a prompt".

        Although again, the term is entirely abused to the extent that washing machines can contain 'AI'. Although just because a term is abused doesn't necessarily mean it's not useful - everything had "Cloud" in it 10 years ago but that term was still useful enough to stick around.

        Perhaps there is an issue that AI can mean lots of things, but I don't yet know of another term that encapsulates the last 5 years' advancements in automated intelligence, and what that technology is likely to be moving forward, which people will readily recognise. Perhaps we need a new word, but AI has stuck and there isn't a good alternative yet, so it is probably here to stay for a bit!

        • strogonoff 16 hours ago

          > As an example, when people use ChatGPT and get an image back, most don't think "oh, so the LLM called out to a diffusion API?" - they just think "oh chat GPT can give me an image if I give it a prompt".

          Note: your first part entirely skipped the process of obtaining the data for, and training, both of the above, which is a crucial part at least on par with which component called which API.

          I don’t think it’s unreasonable to expect people to build an intuition for it, though. It’s healthy when underlying processes are understood to at least a few layers of abstraction, especially in potentially problematic or morally grey areas.

          As an analogy to your example, you could say that when people drink milk they usually don’t think “oh, so this cow was forced to reproduce 123 times with all her children taken away and murdered so that it makes more milk” and simply think “the cow gave this milk”.

          However, as with milk, so with ML tech: it is important to realize that 1) people do indeed learn the former and build the relevant intuitions, and 2) the industry relies on information asymmetry and mass ignorance of these matters (and we all know that information asymmetry is the #1 enemy of a free market working as designed).

        • giantrobot a day ago

          > Although again, the term is entirely abused to the extent that washing machines can contain 'AI'.

          I remember when the exciting term in appliances was "fuzzy logic". As a technology it was just adding some sensors beyond simple timers and thermostats to control things like run time and temperatures of automated washers.

    • mnky9800n a day ago

      This article is all about PINNs being overblown. I think it’s a reasonable take. I’ve seen way too many people dump all their eggs in the PINNs basket when there are plenty of options out there. Those options just don’t include a ticket to the hype train.

    • Closi a day ago

      I think AI is a useful term which usually means a neural network architecture but without specifying the exact architecture.

      I don't think "machine learning" means this, as that term can also refer to linear regression, non-linear optimisation, decision trees, Bayesian networks, etc.

      That's not saying that AI isn't abused as a term - but I do think a more general term to describe the latest 5 years advancements in neural networks to solve problems is useful. Particularly as it's not obvious which model architectures would apply to which fields without more work (or even if novel architectures will be required for frontier science applications).

      • GrantMoyer a day ago

        The field of neural network research is known as Deep Learning.

        • danielbln a day ago

          Eh, not really. All Deep Learning involves neural networks, but not all neural networks are part of deep learning. To be fair, any modern network is also effectively built by deep learning, but your statement as such is inaccurate.

      • daveguy a day ago

        This is incorrect. Machine Learning is a term that refers to numerical as opposed to symbolic AI. ML is a subset of AI as is Symbolic / Logic / Rule based AI (think expert systems). These are all well established terms in the field. Neural Networks include deep learning and LLMs. Most AI has gone the way of ML lately because of the massive numerical processing capabilities available to those techniques.

        AI is not remotely limited to Neural Networks.

    • jgalt212 a day ago

      > There is rarely a constructive discussion around the term “AI”.

      You hit the nail on the head there. AI, in its broadest terms, exists at the epicenter of hype and emotions.

  • currymj a day ago

    AlphaFold is real.

    To the extent you care about chess and Go as human activities, progress there is real.

    There are some other scientific computing problems where AI or neural-network-based methods do appear to be at least part of the actual state of the art (weather forecasting, certain single-molecule quantum chemistry simulations).

    I would like hype of the kind described in the article to be punctured, but this is hard to do if critics make strong absolute claims ("aren't useful in any field whatsoever") which are easily disproven. It hurts credibility.

    • daveguy a day ago

      I've never seen an AI critic say AI isn't "useful in any field whatsoever". Especially one that is known as an expert in and a critic of the field. There may be names that aren't coming to mind because that stance would reduce their specific credibility. Do you have some in mind?

      • currymj a day ago

        the post that I replied to?

        • daveguy a day ago

          The post you replied to was asking for examples based on the general critical discussions they have seen.

          And no offense to the GP, but they clearly aren't an expert in the field or they wouldn't be asking.

          Probably should have replied directly to the post you replied to as much as yours. Was just pointing out that "not useful in any field whatsoever" is not something I've seen from anyone in the field. Even the article doesn't say that.

    • literalAardvark a day ago

      Even more relevant, AlphaEvolve is real.

      Could easily be brick 1 of self-improvement and the start of the banana zone.

      • gthompson512 a day ago

        > "the start of the banana zone"

        What does this mean? Is it some slang for exponential growth, or is it a reference to something like the "paperclip maximizer"?

        • literalAardvark a day ago

          It's slang for the J part of the exponential curve. Didn't expect that to be a problem here, sorry.

      • dirtyhippiefree a day ago

        I’m with gthompson512

        Sounds like a routine Bill Hicks might have come up with if he was still with us.

        He hated obfuscation.

  • simianparrot 2 days ago

    It's why it keeps looking exactly like the NFT and crypto hype cycles to me: yes, the technology has legitimate uses, but the promises of groundbreaking use cases that will change the world are obviously not materialising, and anyone who understands the tech knows they can't.

    It's people making money off hype until it dies, then moving on to the next scam-with-some-use.

    • Flamentono2 2 days ago

      We already have breakthroughs. Benchmark results which have been unheard of before ML.

      Alone language translation got so much better, voice syntesis, voice transcription.

      All my meetings are now searchable and I can ask 'AI' to summarize them in a relatively accurate way, which was impossible before.

      Alphafold made a breakthrough in protein folding.

      Image and Video generation can now do unbelievable things.

      Realtime voice communication with computer.

      Our internal company search suddenly became useful.

      I have zero use cases for NFTs and crypto. I have tons of use cases for ML.

      • parodysbird a day ago

        > Alphafold made a breakthrough in protein folding.

        Sort of. Alphafold is a prediction tool, or, alternatively framed, a hypothesis generation tool. Then you run an experiment to compare.

        It doesn't represent a scientific theory, not in the sense that humans use them. It does not have anywhere near something like the accuracy rate for hypotheses to qualify as akin to the typical scientific testing paradigm. It's an incredibly powerful and efficient tool in certain contexts and used correctly in the discovery phase, but not the understanding or confirmation phase.

        It's also got the usual pitfalls with differentiable neural nets. E.g. you flip one amino acid and it doesn't really provide a proper measure of impact.

        Ultimately, one major prediction breakthrough is not that crazy. If we compare it to e.g. Random Forest and similar models, their impact on science has been infinitely greater.

        • elcritch a day ago

          We already have a precise and accurate theory for protein folding. What we don’t have is the computational power to do true precise simulations at a scale and speed we’d like.

          In many respects, a huge, tangled, barely documented code base of quantum shortcuts, err, perturbative methods, written by inexperienced grad students, isn’t that much more or less intelligible than an AI model learning those same methods.

          • dekhn a day ago

            What "precise and accurate theory for protein folding" exists?

            Nobody has been able to demonstrate convincingly that any simulation or theory method can reliably predict the folding trajectory of anything but the simplest peptides.

            • elcritch a day ago

              > What "precise and accurate theory for protein folding" exists?

              It’s called Quantum Mechanics.

              > Nobody has been able to demonstrate convincingly that any simulation or theory method can reliably predict the folding trajectory of anything but the simplest peptides.

              No, what we don’t have are simplified models or specialized theories that reduce the computational complexity enough to efficiently solve the QM or even molecular dynamics systems needed to predict protein folding for more than the simplest peptides.

              Granted, it’s common to mix things up and say that not having a computationally tractable model means we don’t have a precise and accurate theory of PF. Something like [0] resulting in an accurate, precise, and fast theory of protein folding would be incredibly valuable. That may not be possible outside specific cases, though I believe AlphaFold indicates otherwise, as it appears life has evolved various building blocks which make a simpler physics of PF tractable to evolutionary processes.
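
              (For concreteness, "the theory" here is the non-relativistic many-body Schrödinger equation; a standard textbook form, sketched from memory rather than quoted from either linked reference, is below. The wavefunction depends on the coordinates of every electron and nucleus at once, which is why brute-force solutions scale exponentially with system size.)

                  i\hbar\,\frac{\partial}{\partial t}\,\Psi(\mathbf{r}_1,\ldots,\mathbf{r}_N,t)
                      = \hat{H}\,\Psi(\mathbf{r}_1,\ldots,\mathbf{r}_N,t),
                  \qquad
                  \hat{H} = -\sum_{i} \frac{\hbar^2}{2 m_i}\,\nabla_i^2
                            + \sum_{i<j} \frac{q_i q_j}{4\pi\varepsilon_0\,\lvert\mathbf{r}_i-\mathbf{r}_j\rvert}

              (with the sums running over all electrons and nuclei, charges q_i and masses m_i)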

              Quantum computing, however, could change that [1]. If practical QC is feasible, that is, which is beginning to look more and more likely. Some say QC is already proven and just needs to be scaled up.

              0: https://en.m.wikipedia.org/wiki/Folding_funnel 1: https://www.nature.com/articles/s41534-021-00368-4

              • dekhn a day ago

                I don't think anybody is 100% certain that doing a full quantum simulation of a protein (in a box of water) would recapitulate the dynamics of protein folding. It seems like a totally reasonable claim, but one that could not really be evaluated.

                If you have a paper that makes a strong argument around this claim, I'd love to see it. BTW, regarding folding funnels: I learned protein folding from Ken Dill as a grad student in biophysics at UCSF, and used to run MD simulations of nucleic acids and proteins. I don't think anybody in the field wants to waste the time worrying about running full quantum simulations of protein folding; it would be prohibitively expensive even with far better QM simulators than we have now (i.e., n squared or better).

                Also, the article you linked: they are trying to find the optimal structure (called the fold by some in the field). That's not protein folding - it's ground-state de novo structure prediction. Protein folding is the process by which an unfolded protein adopts the structured state, and most proteins don't actually adopt some single static structure but tend to interconvert between several different substructures that are all kinetically accessible.

                • elcritch 13 hours ago

                  > I don't think anybody is 100% certain that doing a full quantum simulation of a protein (in a box of water) would recapitulate the dynamics of protein folding.

                  True, until it's experimentally shown there's still some possibility QM wouldn't suffice. Though I've not read anything that'd give reason to believe QM couldn't capture the dynamic behavior of folding, unlike the uncertainty around dark matter or quantum supremacy or quantum gravity.

                  Though it might be practically impossible to setup a simulation using QM which could faithfully capture true protein folding. That seems more likely.

                  > It seems like a totally reasonable claim, but one that could not really be evaluated.

                  If quantum supremacy holds, my hunch is that it would be feasible to evaluate it one day.

                  The paper I linked was mostly to show that there seem to be approaches using quantum computing to speed up solving QM simulations. We're still in the early days of quantum computing algorithms and it's unclear what's possible yet. Tackling a dynamic system like an unfolded protein folding is certainly a ways away though!

                  > Also the article you linked- they are trying to find the optimal structure (called fold by some in the field). That's not protein folding- it's ground state de novo structure prediction.

                  Thanks! I haven't worked on quantum chemistry for many years, and only tangentially on protein folding, so it's useful to know the terminology. The metastable states and that whole possibility of folding states / pathways / etc. fascinate me as potentially being an emergent property of protein folding physics and biology as we know it.

              • parodysbird 20 hours ago

                > It’s called Quantum Mechanics.

                Nobody is suggesting anything entails a possible violation of quantum mechanics, so yes, obviously any system under inquiry is assumed to abide by QM.

      • mountainb a day ago

        On one hand, maybe it's good to have better searchable records, even if it's hard to quantify the benefit.

        On the other hand, now all your meetings produce reams of computer searchable records subject to discovery in civil and criminal litigation, possibly leading to far worse liability than would have been possible in a mostly email based business.

        • DennisP a day ago

          Maybe don't do crimes?

          If the technology provides a major boost in productivity to ethical teams, and is useless for unethical teams, that kinda seems like a good thing.

      • Yoric a day ago

        That is absolutely correct.

        The problem is that the hype assumes that all of this is a baseline (or even below the baseline), while there are no signs that it can go much further in the near future – and in some cases, it's actually cutting-edge research. This leads to a pushback that may be disproportionate.

      • uludag 2 days ago

        I'm sure there's many people out there who could say that they hardly use AI but that crypto has made them lots of money.

        At the end of the day, searching work documents and talking with computers is only desirable inasmuch as it is economically profitable. Crypto, at the end of the day, is responsible for a lot of people getting wealthy. Was a lot of this wealth obtained on sketchy grounds? Probably, but the same could be said of AI (for example, the recent sale of Windsurf for an obscene amount of money).

        • Flamentono2 a day ago

          Crypto is not making people rich, it is about moving money from Person A to Person B.

          And sure, everyone who got money from others by gambling is biased. Fine with me.

          But in comparison to crypto, people around me actually use AI/ML (most of them).

          • nodar86 a day ago

            Every activity that makes people rich is by definition moving money from Person A to Person B.

            • Flamentono2 12 hours ago

              Let's be nitpicky :)

              I said move money from A to B, which implies that nothing else is happening. Otherwise it would be an exchange.

              Sooo i would say my wording was right?! :)

        • holoduke a day ago

          Crypto is not creating anything. It's a scheme based on gambling: Person A gets rich, Person B loses money. It does not really contribute to anything.

        • littlestymaar 2 days ago

          > is only desirable inasmuch as they are economically profitable.

          The big difference is that they are profitable because they create value, whereas cryptocurrencies are a zero-sum game between participants. (It is in fact a negative-sum game, since some people are getting paid to make the thing work so that others can gamble on the system.)

      • vanattab a day ago

        Which ai program do you use for live video meeting translation?

        • Flamentono2 a day ago

          MS Teams, Google Meet (whatever they use, probably Gemini) and Whisper.

      • exe34 a day ago

        You have to understand, real AI will never exist. AI is that which a machine can't do yet. Once it can do it, it's engineering.

    • StopDisinfo910 2 days ago

      I don’t remember when NFTs and crypto helped me draft an email, wrote my meeting minutes for me, or allowed me to easily search information previously locked in various documents.

      I think there is this weird take amongst some on HN where LLMs are either completely revolutionary and making break through or utterly useless.

      The truth is that they are useful already as a productivity tool.

      • squidbeak a day ago

        I think imagination may be the reason for this. Enthusiasts have kept that first wave of amazement at what AI is able to do, and find it easier to anticipate where this could lead. The pessimists on the other hand weren't impressed with its capabilities in the first place - or were, and then became disillusioned for something it couldn't do for them. It's naturally easier to look ahead from the optimistic standpoint.

        There's also the other category who are terrified about the consequences for their lives and jobs, and who are driven in a very human way to rubbish the tech to convince themselves it's doomed to failure.

        The optimists are right of course. A nascent technology at this scale and with this kind of promise, whose development is spurring a race between nation states, isn't going to fizzle out or plateau, however much its current iterations may come short of any particular person's expectations.

        • aleph_minus_one a day ago

          > It's naturally easier to look ahead from the optimistic standpoint.

          It is similarly easy to look ahead from a pessimistic standpoint (e.g. how will this bubble collapse, and who will pay the bill for the hype?). The problem, rather, is that the overhyped optimistic standpoint is much more encouraged by society (and of course by the marketing).

          > There's also the other category who are terrified about the consequences for their lives and jobs, and who are driven in a very human way to rubbish the tech to convince themselves it's doomed to failure.

          There is also a third type who are not terrified of AI, but of the bad decisions managers (will) make because of all this AI craze.

          • squidbeak a day ago

            No, I meant it's easier for optimists to look ahead at the possibilities inherent in the tech itself, which isn't true of pessimists, who - as you show - see instead the pattern of failed techs in it, whether that pattern matches AI or not.

            If you can see the promise, you can see a gap to close between current capability and accomplished product. The question is then whether there's some barrier in that gap to make the accomplished product impossible forever. Pessimists tend to have given up on the tech already, so to them any talk about closing that gap is idle daydreaming or hype.

      • lazide 2 days ago

        Having tried to use various tools - in those specific examples - I found them either pointless or actively harmful.

        Writing emails - once I knew what I wanted to convey, the rest was so trivial as to not matter, and any LLM tooling just got in the way of actually expressing it as I ended up trying to tweak the junk it was producing.

        Meeting minutes - I have yet to see one that didn’t miss something important while creating a lot of junk that no one ever read.

        And while I’m sure someone somewhere has had luck with the document search/extract stuff, my experience has been that the hard part was understanding something, and then finding it in the doc or being reminded of it was easy. If someone didn’t understand something, the AI summary or search was useless because they didn’t know what they were seeing.

        I’ve also seen a LOT of both junior and senior people end up in a haze because they couldn’t figure out what was going on - and the AI tooling just allowed them to produce more junk that didn’t make any sense, rather than engage their brain. Which causes more junk for everyone to get overwhelmed with.

        IMO, a lot of the ‘productivity’ isn’t actually, it’s just semi coherent noise.

        • denvrede 2 days ago

          +1 for all of the above.

          > Meeting minutes - I have yet to see one that didn’t miss something important while creating a lot of junk that no one ever read.

          Especially that one. In the beginning it seemed to be OK for very structured meetings with a small number of participants, but once meetings got more crowded, included non-native speakers, and ran longer than 30 minutes (like workshops), it went bad.

        • silon42 2 days ago

          > Writing emails - once I knew what I wanted to convey, the rest was so trivial as to not matter, and any LLM tooling just got in the way of actually expressing it as I ended up trying to tweak the junk it was producing.

          +1. LLMs will help you produce the "filler" nobody wants to read anyway.

          • DennisP a day ago

            That's ok, the recipient can use an LLM to summarize it.

            In the end, we'll all read and write tight little bullet points, with the LLM text on the wire functioning as the world's least efficient communication protocol.

      • mountainriver a day ago

        I don’t remember when they wrote half my code in a fraction of the time for my high paid SWE job.

        I do have a bad memory from all the weed though, so who knows

      • apwell23 a day ago

        > wrote my meetings minutes

        why is this such a posterchild for llms. everyone always leads with this.

        how boring are these meetings and do ppl actually review these notes? i never ever saw anyone reading meeting minutes or even mention them.

        Why is this usecase even mentioned in LLM ads.

        • StopDisinfo910 11 hours ago

          Because meeting minutes are hard, annoying to do and LLMs are good at it.

          To be blunt, I think most HNers are young software developers who never attend any meetings of significance and don’t have to deal with many different topics, so they fail to see the usefulness because they are not in a position to understand it.

          The tells are everywhere, like people mentioning work orders, which is something extremely operational. If nothing messy and complicated is discussed in the meetings you attend, it’s no surprise you don’t get why minutes are useful or where the value is. It doesn’t mean there is no value.

          • apwell23 8 hours ago

            ok i'll take your word for it that people read meeting notes.

            • StopDisinfo910 4 hours ago

              Meeting notes are not only there to be read. Their usefulness is that they are a trace of what was said and decided, agreed upon by everyone in the meeting, which is extremely important as soon as things get political.

        • batty_alex a day ago

          I think the same thing every time. I've never had anyone read my meeting notes and they're better off in some sort of work order system anyways.

          All I'm hearing is an appeal to making the workplace more isolating. Don't talk to each other, just talk to the machine that might summarize it wrong

        • SiempreViernes a day ago

          Indeed, it seems doubtful that an org whose meetings are so structureless that it struggles to write minutes is capable of having meetings for which minutes serve any purpose beyond covering ass.

      • bgnn 2 days ago

        Exactly this. What we expect from them is our speculation. In reality nobody knows the future and there's no way to know the future.

        • bgnn a day ago

          Wow, I didn't expect this to be downvoted. I guess there are people who know the future.

      • cornholio 2 days ago

        For now, the reasoning abilities of the best and largest models are somewhat on par with those of a human crackpot with an internet connection, that misunderstands some wild fact or theory and starts to speculate dumb and ridiculous "discoveries". So the real world application to scientific thought is low, because science does not lack imbeciles.

        But of course, models always improve and they never grow tired (if enough VC money is available), and even an idiot can stumble upon low hanging fruits overlooked by the brightest minds. This tireless ability to do systematic or brute-force reasoning about non-frontier subjects is bound to produce some useful results like those you mention.

        The comparison with a pure financial swindle and speculative mania like NFTs is of course an exaggeration.

        • Xmd5a a day ago

          I see myself in these words:

          >that misunderstands some wild fact or theory and starts to speculate dumb and ridiculous "discoveries"

          >even an idiot can stumble upon low hanging fruits overlooked by the brightest minds.

        • datadrivenangel a day ago

          I want an idiot to stumble on the low hanging fruits of my meeting minutes.

      • ktallett 2 days ago

        The hype surrounding them is not about being a PA, and tbh a lot of these use cases already have existing methods that work just fine. There are already ways to find key information in files, and speedy meeting minutes are really just a template away.

        • Flamentono2 2 days ago

          Absolutely not true.

          I was never able to get meeting transcription of that quality that cheaply before. I have followed dictation software for over a decade, and thanks to ML the open source software is suddenly a lot better than ever before.

          Our internal company search, with state-of-the-art search indexes and search software, was always shit. Now I ask an agent about a product standard and it just finds it.

          Image generation never existed before.

          Building a chatbot that actually does what you expect, and handles more than the same 10 canned questions about its features, was hard and never really good; now it just works.

          I'm also not aware of any prior software rewriting or even writing documents for me, structuring them, etc.

          • ktallett 2 days ago

            A lot of these issues you have had are simply user error or not using the right tool for the job.

            • Flamentono2 a day ago

              I work for one very big software company.

              If this was 'a simple user error' or 'not using the right tool for the job', then it was an error made by smart people, and it still got fixed by using AI/ML in an instant.

              With this, my argument still stands, even if it were for a different reason, which I personally doubt.

              • ktallett a day ago

                Often big companies are the least efficient, and big companies can still make mistakes or have very inefficient processes. There was already a perfectly simple solution to the issue that could have been utilised prior to this, and it is still the most efficient solution overall.

                Also, everyone does dumb things, even smart people do dumb things. I do research in a field that many outsiders would say you must be smart to do (not my view) and every single one of us does dumb shit daily. Anyone who thinks they don't isn't as smart as they think they are.

            • StopDisinfo910 2 days ago

              Well, LLMs are the right tool for the job. They just work.

              I mean if you are going to deny their usefulness in the face of plenty of people telling you they actually help, it’s going to be impossible to have a discussion.

              • ktallett a day ago

                They can be useful; however, for admin tasks there are plenty of valid alternatives that really take no longer time-wise, so why bother using all that computing power?

                They don't just work though; they are not foolproof and definitely require double-checking.

                • StopDisinfo910 a day ago

                  > valid alternatives that really take no longer time wise

                  That’s not my experience.

                  We use them more and more at my job. It was already great for most office tasks, including brainstorming simple things, but now suppliers are starting to sell us agents which pretty much just work, and honestly there are a ton of things for which LLMs seem really suited.

                  CMDB queries? Annoying SAP requests for which you have to delve through dozens of menus? The stupid interface of my travel management and expense software? Please give me a chatbot for all of that which can actually decipher what I’m trying to do. These are hours of productivity unlocked.

                  We are also starting to deploy more and more RAG on select core business datasets and it’s more useful than even I anticipated, and I’m already convinced. You ask, you get a brief answer and the documents back. This used to be either hours of delving through search results or emails with experts.

                  As imperfect as they are now, the potential value of LLMs is already tremendous.
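
                  (For readers unfamiliar with the pattern being described: a toy retrieve-then-answer sketch, in Python, is below. The document snippets, IDs, and the bag-of-words 'embedding' are invented for illustration; real deployments use a learned embedding model and hand the retrieved passages to an LLM to draft the brief answer.)

                      import math
                      import re
                      from collections import Counter

                      docs = {
                          "STD-114": "Packaging standard: boxes must use FSC-certified cardboard.",
                          "STD-205": "Product standard: enclosures are rated IP67 for dust and water.",
                          "HR-012": "Travel policy: book flights through the internal portal.",
                      }

                      def embed(text):
                          # Toy 'embedding': a bag-of-words vector; real RAG uses a learned embedding model.
                          return Counter(re.findall(r"[a-z0-9]+", text.lower()))

                      def cosine(a, b):
                          dot = sum(a[t] * b[t] for t in a if t in b)
                          na = math.sqrt(sum(v * v for v in a.values()))
                          nb = math.sqrt(sum(v * v for v in b.values()))
                          return dot / (na * nb) if na and nb else 0.0

                      def retrieve(query, k=2):
                          # Rank documents by similarity to the query; return the top k with their IDs.
                          q = embed(query)
                          ranked = sorted(docs.items(), key=lambda kv: cosine(q, embed(kv[1])), reverse=True)
                          return ranked[:k]

                      # "STD-205" comes back first; an LLM would then draft the answer and cite the IDs.
                      print(retrieve("which standard covers the IP67 rating of product enclosures?"))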

                  • ktallett a day ago

                    How do you check the accuracy of these? You cited brainstorming as an example of something they are great at, but obviously experts are experts for a reason.

                    My issue here is that a lot of this is solved by good practice. For example, travel management and expenses have been solved: company credit card. I don't need one slightly better piece of software to manage one terrible piece of software to solve an issue that already has a solution.

                    • StopDisinfo910 11 hours ago

                      > How do you check accuracy of these?

                      Because LLMs send you back links to the tools and you still get the usual confirmation process when you do things.

                      The main issue never was knowing what to do but actually getting the tools to do it. LLMs are extremely good at turning messy stuff into tools manipulation especially where there never was an API available in the first place.

                      It’s not a question of practices. Anyone who has ever worked for a very large company knows that systems are complicated by necessity and everything moves at the speed of a freighter ship if you want to make significant changes.

                      Of course we need one slightly better piece of software to manage terrible pieces of software. There is insane value there. This is a major issue for most companies. I have seen millions spent on getting better dashboards from SAP which paid for themselves in actual savings.

            • jodrellblank a day ago

              You know what they were doing and what tools they were using… how?

              • ktallett a day ago

                OK, take transcription: they were trying to use free-as-in-cost tools instead of software that works efficiently and has been effective for decades now.

        • StopDisinfo910 2 days ago

          Microsoft is absolutely selling them as a PA and already selling a lot. I think HNers, being mostly software developers, live in a bubble when it comes to the reality of what LLMs are actually used for.

          Speedy minutes are absolutely not a template away. Anyone who has ever had to write minutes for a complicated meeting knows it’s hard and requires a lot of back and forth for everyone to agree about what was said and decided.

          Now you just turn on Copilot and you get both a transcript and an adequate basis for good minutes. Bonus point: it’s made by a machine so no one complains it has bias.

          Some people here are blind to how useful that is.

          • IanCal a day ago

            There are so many tasks in the world that

            1. Involve a computer

            2. Do not require incredible intelligence

            3. Involve the messiness of the real world enough that you can't write exact code to do it without it being insanely fragile

            LLMs suddenly start to tackle these, and tackle them kind of all at once. Additionally they are "programmed" in just English and so you don't need a specialist to do something like change the tone of the summary or format, you just write what you want.

            Assuming the models never get any smarter or even cheaper, and all we get is neater integrations, I still think this is all huge.

            • ktallett a day ago

              Do you really believe the outlay in terms of computer power is worth it to change the tone of an email? If it never gets better, this is a vast waste of an enormous amount of resources.

              • IanCal a day ago

                That's not what I've talked about them being for, but regardless it surely depends on the impact. If it can show you how someone may misunderstand your point and either help correct it or just show the problem, then yes, that can easily be worth spending a few cycles on. The additional energy cost of further back-and-forths caused by a misunderstanding could very easily be higher. At full whack, my GPU draws something like 10x what my monitor does, so fixing something quickly and automatically can easily use less power than sorting it out by hand.

                Again though, that's not at all what I've talked about.

          • ktallett a day ago

            This is a business practice issue and staff issue, not a meeting minutes issue. I have meetings daily, and have never had this issue. You make it clear what is decided during the meeting, give anyone a chance to query or question, then no one can argue.

            • closewith a day ago

              I can guarantee that if you're this obtuse in real life, the reason you have no problems with meetings minutes is because no-one bothers to meet with you.

              • ktallett a day ago

                You would be wrong. I am actually quite perky, I just don't suffer foolish admin tasks easily. I only have meetings with a goal (not just for the sake of it), and then I simply make sure it is clear what the solution is, no matter whose idea it was. I don't care about being right or wrong in a meeting, I care that we have a useful outcome and it isn't an hour wasted. Having a meeting whereby the outcome of a meeting is unclear is a complete waste of time, and is not solved by tech, it is solved by how your meetings are managed.

                • closewith a day ago

                  > I simply make sure it is clear what the solution is

                  People simply humour you to get around your personality.

                  • ktallett a day ago

                    You can have your opinion on that.

        • rkuodys 2 days ago

          What I see LLMs as at this point is simplified input and output solutions with reduced barriers to entry, so applications could become more widespread.

          Now that I think of it, maybe this AI era is not electricity, but rather the GUI - like the time when Jobs (or whoever) figured out and adopted the modern GUI on computers, allowing more widespread use of the computer.

          • ktallett 2 days ago

            Do they only have reduced barriers to entry if you aren't fussed about the accuracy of the output? If you care that everything works correctly and is factually correct, do you not need the same competency as just doing the task by hand?

          • flir a day ago

            It's a good analogy because the key development does seem to have been the interface. Instead of wrapping it up as a text autocomplete (a la google search), openai wrapped it up as an IM client, and we were off to the races.

    • Voloskaya 2 days ago

      > to anyone that understands the tech it can’t.

      This is a ridiculous take that makes me think you might not « understand the tech » as much as you think you do.

      Is AI useful today? That depends on the exact use case, but overall it seems pretty clear the hype is greater than the use currently. But sometimes I feel like everyone forgets that ChatGPT isn’t even 3 years old; 6 years ago we were stuck with GPT-2, whose most impressive feat was writing a nonsense poem about a unicorn, and AlphaGo is not even 10 years old.

      If you can’t see the trend and just think that what we have today is the best we will ever achieve, thus the tech can’t do anything useful, you are getting blinded by contrarianism.

      • vrighter a day ago

        If there is a single objective right answer, the model should output a probability of 1 for it, and 0 for everything else. Ex. If I ask "Is a sphere a curved object?" The one and only answer is "100% yes" not "I am 99% sure it is" (and once in a while actually say it isn't)

        This is pretty much impossible to achieve with current architectures (which aren't all that different to those of old, just bigger). If they did, then they'd be woefully overfitted. They can't be made reliable. Anyone who understands the tech does know this.
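
        (A toy illustration of that point, assuming the usual softmax output layer; this isn't anyone's production code. With any finite logit gap the winning answer never gets exactly probability 1, and pushing it arbitrarily close is exactly the memorization/overfitting problem mentioned above.)

            import math

            def softmax(logits):
                # Standard softmax: subtract the max for numerical stability, then normalize.
                m = max(logits)
                exps = [math.exp(z - m) for z in logits]
                total = sum(exps)
                return [e / total for e in exps]

            # Even with a 20-point logit margin in favour of "yes", the probability stays below 1.
            print(softmax([20.0, 0.0]))  # roughly [0.999999998, 0.000000002]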

        • Voloskaya a day ago

          > Anyone who understands the tech does know this

          Yes, and this does not mean the technology can never be useful.

          I work every day with people who have false beliefs about tech. I have a friend who until recently thought there were rivers on the moon, some people believe climate change is a hoax, and I often forget things people told me and they have to tell me again.

          Are humans not useful at anything?

    • jstummbillig a day ago

      I think people are mostly bad at value judgements, and AI is no exception.

      What they naively wished the future was like: Flying cars. What they actually got (and is way more useful but a lot less flashy): Cheap solar energy.

      • aleph_minus_one a day ago

        > What they naively wished the future was like: Flying cars.

        This future is already there:

        We have flying cars: they are called "helicopters" (see also https://xkcd.com/1623/).

        • AstralStorm a day ago

          Oh, they're not even close to cars in availability, they're much harder to operate, much more expensive, and they tend to fall out of the sky.

          Thank you for providing an example that directly maps to usefulness of ANN in most research though.

          • AngryData a day ago

            Helicopters don't just fall out of the sky, at least not any more than planes fall out of the sky; they can auto-rotate to the ground without engine power.

            • agurk a day ago

              You are absolutely correct about autorotation and helicopters not falling out of the sky. There is one nuance that the rotor blades still need to be able to rotate for this, and a failed gearbox can prevent that. Anecdotally that feels like the most common cause when I read about another crash in the North Sea.

            • vunderba a day ago

              True but even lowering collective and letting autorotation take over you still hit the ground REALLY hard - enough to sustain injuries to your back and neck in some cases.

              Glide ratios in GA (like high-wing Cessnas) are much more forgiving, assuming you can find a place to put down.

    • guardian5x 2 days ago

      AI looks exactly like NFTs to you? I don't understand what you mean by that. AI already has tons more uses.

      • waldrews 2 days ago

        One is a technical advance as important as anything in human history, realizing a dream most informed thinkers thought would remain science fiction long past our lifetimes, upending our understanding of intelligence, computation, language, knowledge, evolution, prediction, psychology... before we even mention practical applications.

        The other is worse than nothing.

    • edanm a day ago

      > the promises of groundbreaking use cases that will change the world are obviously not materialising and to anyone that understands the tech it can’t.

      In what world is this obviously not materializing? Plenty of people use GenAI for coding, with some claiming we're approaching the level of GenAI being able to automate vast portions of a developer's job.

      Do you think this is wrong and the people saying this (some of them very experienced developers) are simply mistaken or lying?

      Or do you think it's not a big deal?

    • ccppurcell a day ago

      LLMs have blown crypto and nfts off the map, at least for normies like me. I wonder what will blow LLMs off the map?

    • mountainriver a day ago

      Code assistants alone prove this to be false.

    • helloplanets a day ago

      I'd be interested in reading some more from the people you're referring to when talking about experts who understand the field. At least to the extent I've followed the discussion, even the top experts are all over the place when it comes to the future of AI.

      As a counterpoint: Geoffrey Hinton. You could say he's gone off the deep end on a tangent, but I definitely don't think his incentive is to make money off of hype. Then there's Yann LeCun saying AI "could actually save humanity from extinction". [0]

      If these guys are just out-of-touch talking heads, who are the new guard people should read up on?

      [0]: https://www.theguardian.com/technology/2024/dec/27/godfather...

    • littlestymaar 2 days ago

      > It’s why it keeps looking exactly like NFT’s and crypto hype cycles to me: Yes the technology has legitimate uses

      AI has legitimate uses; cryptocurrency only has “regulations evasion” and NFTs have literally no use at all, though.

      But that's very true that the AI ecosystem is crowded with grifters who feed on baseless hype, and many of them actually come from cryptocurrencies.

  • ergsef a day ago

    Speaking out against the hype is frowned upon. I'm sure even this very measured article about "I tried it and it didn't work for me" will draw negative attention from people who think AI is the Second Coming.

    It's also very hard to prove a negative. If you predict "AI will never do anything of value" people will point to literally any result to prove you wrong. TFA does a good job debunking some recent hype, but the author cannot possibly wade through every hyperbolic paper in every field to demonstrate the claims are overblown.

  • montebicyclelo a day ago

    > then go on to explain how they aren't useful in any field whatsoever

    > Where are the AI-driven breakthroughs

    > are we just using AI to remix existing general knowledge, while making no progress of any sort in any field using it?

    The obvious example of a highly significant AI-driven breakthrough is Alphafold [1]. It has already had a large impact on biotech, helping with drug discovery, computational biology, protein engineering...

    [1] https://blog.google/technology/ai/google-deepmind-isomorphic...

    • boxed a day ago

      I'm personally waiting for the other shoe to drop here. I suspect that, since nature begins with an existing protein and modifies it slightly, AlphaFold is crazy overfitted to the training data. Furthermore, the enormous success of AlphaFold means that the number of people doing protein structure solving has likely crashed.

      So not only are we using an overfitting model that probably can't handle truly novel proteins, we have stopped actually doing the research to notice when this happens. Pretty bad.

  • swyx 2 days ago

    > Where are the AI-driven breakthroughs? Or even the AI-driven incremental improvements?

    literally last week

    https://deepmind.google/discover/blog/alphaevolve-a-gemini-p...

    • boxed a day ago

      Yea, except that we see "breakthrough" stuff like this all the time, and it almost always is quickly found out that it's fraudulent in some way. How many times are we to be fooled before we catch on and don't believe press releases with massive selection bias?

    • dwroberts 2 days ago

      But it only seems to be labs and companies that also have a vested interest in selling it as a product that are able to achieve these breakthroughs. Which is a little suspect, right?

      • swyx 2 days ago

        too tinfoil hat. google is perfectly happy to spend billions dogfooding their own TPUs and not give the leading edge to the public.

        • dwroberts a day ago

          I’m not saying they’re phoney - just we need to take this stuff with a big pinch of salt.

          The Microsoft paper around the quantum “breakthrough” is in a different field, but maybe a good example of why we need to be a little more cautious of research-as-marketing.

  • biophysboy a day ago

    An example of an "AI" incremental improvement would be Oxford Nanopore sequencing. They extrude DNA through a nanopore, measure the current, and decode the bases using recurrent neural networks.

    They exist all over science, but they are just one method among many, and they do not really drive hypotheses or interpretations (even now)

  • eschaton 2 days ago

    If they didn’t say that the rah-rah-AI crowd would come for them with torches and pitchforks. It’s a ward against that, nothing more.

    • Sharlin a day ago

      Similar to the way many Trump supporters, when daring to criticize him, feel the need to assert that they still love him and would vote for him again.

      (See, eg. r/LeopardsAteMyFace for examples. It’s fascinating.)

      • NotCamelCase a day ago

        Or any time one dares to criticize Israel for their recent contributions to peace on Earth (wink wink) -- it has to be prefaced with "Let me say that I'm the biggest defender of the Jews and fight against anti-Semitism".

        It's moot.

  • Wilsoniumite 2 days ago

    Some new ish maths has been discovered. It's up to you if this is valid or impressive enough, but I think it's significant for things to come: https://youtu.be/sGCmu7YKgPA?si=EG9i0xGHhDu1Tb0O

    • snodnipper a day ago

      Personally, I have been very pleased with the results despite the limitations.

      Like many (I suspect), I have had several users provide comments that the AI processes I have defined have made meaningful impacts on their daily lives - often saving them double digit hours of effort per week. Progress.

  • isaacfrond a day ago

    The article itself lists as successful, even breakthrough, applications of AI: protein folding, weather forecasting, and drug discovery.

  • nyarlathotep_ a day ago

    > say things like "of course I don't doubt that AI will lead to major discoveries", and then go on to explain how they aren't useful in any field whatsoever?

    This is "paying the toll", otherwise one will be accused of being a "luddite."

  • perlgeek 2 days ago

    > Where are the AI-driven breakthroughs?

    The only thing that seems to live up to the hype is AlphaFold, which predicts protein folding based on amino acid sequences, and of which people say that it actually makes their work significantly easier.

    But, disclaimer, this is only from second-hand knowledge, I'm not working in the field.

    • rafaelmn 2 days ago

      This is another dimension of the problem - what's even considered AI? AlphaFold is a very specialized model - and I feel the AI boom is driven by the hypothesis that general models eventually outperform specialized ones given enough size/data/whatever.

      • _0ffh 2 days ago

        While I hate the apparent renaming of everything ML to "AI", things like AlphaFold would be "narrow AI".

        As to the common idea of having to wait for general AI (AGI) to bring the gains, I have been quite sure since the start of the recent AI hype cycle that narrow AI will have silently transformed much of the world before AGI even hits the town.

      • perlgeek a day ago

        In my head, I just substitute "AI" with "machine learning" or "statistics".

        > and I feel the AI boom is driven by hypothesis that general models eventually outperform specialized ones given enough size/data/whatever.

        I think in the sciences, I'd generally put my money on the specialized models.

        I hope that the hype around AI makes it easier (by providing tooling, platforms, better algorithms, educational materials etc.) to train specialized models.

        Kind of a trickle-down of hype money :-)

        • rafaelmn a day ago

          I expect the opposite - hardware/compute is going to be locked up in AGI quests unless the bubble pops, and then it gets discounted.

  • Konnstann a day ago

    Depending on your definition of AI, a pipeline for drug repurposing my team used was able to identify a therapeutic for a rare disease that was beating the state of the art in every test they threw at it, eventually being given orphan drug designation by the FDA. I doubt this would have happened without machine learning or AI or whatever you want to call it.

    I'm also against "agents as scientists" as a concept for numerous reasons, but deep learning etc has or is leading to breakthroughs.

  • PurpleRamen a day ago

    > Where are the AI-driven breakthroughs?

    Define breakthrough. When is the improvement big enough to count as one?

    Define AI. Are you talking about modern LLM, or is old school ML also in that question?

    I mean, Google's AI company had quite an impact with AlphaFold and other projects.

    > Or are we just using AI to remix existing general knowledge

    Is remixing bad? Isn't much of science today "just" remixing with slight improvements? I mean, there is a reason why we have theoretical and practical scientists. Doing boring lab work and accidentally discovering something exciting is not the only way science happens. Analysing data and remixing information, building new theories, is also important.

    And don't forget, we don't have AGI yet. Whatever AI is doing today is limited by what humans are using it for. Another question is whether LLMs are already normalized enough that we no longer see them as very special when they are used somewhere. So we might not even notice if AI has a significant impact on a breakthrough.

  • dfxm12 a day ago

    There's hype, then there's the understanding that you never use version 1.0 of a product. In this sense, AI as we know is barely in an alpha version. I think the authors you're referring to understand the marketing around AI overstates its usefulness, but they are hopeful for a future where AI can be helpful.

  • fabian2k 2 days ago

    I suspect that people saying this are avoiding broad conclusions based only on the AI tools that exist right now at this moment. So they leave a lot of room for the next versions to improve.

    Maybe too much room, but it's hard to predict if AI tools will overcome their limitations in the near future.

  • nerdponx a day ago

    New numerical computing algorithms are being developed with AI assistance, which probably would not have been discovered otherwise. There was an article here a few days ago about one of those. It's incremental but it's not nothing.

  • rsynnott a day ago

    They do mention that it has been somewhat useful in protein folding.

    > Or are we just using AI to remix existing general knowledge, while making no progress of any sort in any field using it?

    AIUI they are generally not talking about LLMs here.

  • moffkalast 2 days ago

    > AlphaEvolve’s procedure found an algorithm to multiply 4x4 complex-valued matrices using 48 scalar multiplications, improving upon Strassen’s 1969 algorithm that was previously known as the best in this setting. This finding demonstrates a significant advance over our previous work, AlphaTensor, which specialized in matrix multiplication algorithms, and for 4x4 matrices, only found improvements for binary arithmetic.

    > To investigate AlphaEvolve’s breadth, we applied the system to over 50 open problems in mathematical analysis, geometry, combinatorics and number theory. The system’s flexibility enabled us to set up most experiments in a matter of hours. In roughly 75% of cases, it rediscovered state-of-the-art solutions, to the best of our knowledge.

    > And in 20% of cases, AlphaEvolve improved the previously best known solutions, making progress on the corresponding open problems. For example, it advanced the kissing number problem. This geometric challenge has fascinated mathematicians for over 300 years and concerns the maximum number of non-overlapping spheres that touch a common unit sphere. AlphaEvolve discovered a configuration of 593 outer spheres and established a new lower bound in 11 dimensions.

    https://storage.googleapis.com/deepmind-media/DeepMind.com/B...

    (this is an LLM driven pipeline)
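
    (For a concrete sense of what "fewer scalar multiplications" means, here is the classic 2x2 Strassen identity - 7 multiplications instead of the naive 8 - as a small Python sketch for illustration only. Applied recursively to 2x2 blocks it gives 7^2 = 49 multiplications for 4x4 matrices, which is the Strassen baseline the quoted 48-multiplication complex-valued scheme improves on.)

        def strassen_2x2(A, B):
            # Multiply two 2x2 matrices with 7 scalar multiplications (Strassen, 1969).
            (a11, a12), (a21, a22) = A
            (b11, b12), (b21, b22) = B
            m1 = (a11 + a22) * (b11 + b22)
            m2 = (a21 + a22) * b11
            m3 = a11 * (b12 - b22)
            m4 = a22 * (b21 - b11)
            m5 = (a11 + a12) * b22
            m6 = (a21 - a11) * (b11 + b12)
            m7 = (a12 - a22) * (b21 + b22)
            return [[m1 + m4 - m5 + m7, m3 + m5],
                    [m2 + m4, m1 - m2 + m3 + m6]]

        # Sanity check against the naive 8-multiplication product.
        A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
        naive = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
        assert strassen_2x2(A, B) == naive  # [[19, 22], [43, 50]]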

    • Ygg2 2 days ago

      That's less LLM and more three projects by the DeepMind team.

      And it's far from commercial availability.

      • moffkalast 2 days ago

        Well to me personally it at least proves something that's long been touted as impossible, that the current architecture can in fact do better than all humans at novel tasks, even if it needs a crutch at the moment.

        An LLM-based system now holds the SOTA approach on several math problems, how crazy is that? I wasn't convinced before, but now I guess it won't be many decades before we view making new scientific advancements as viable as winning against stockfish.

        • Ygg2 a day ago

          Yeah, but those rely on setting up the evolve function. And they aren't guaranteed to be better than humans. They might find you an improvement, but they aren't guaranteed to, as shown here [1] (green means better, red means worse, gray means same).

          [1] https://youtu.be/sGCmu7YKgPA?t=480

          • moffkalast a day ago

            Last I checked, humans weren't guaranteed to do anything either.

            • Ygg2 a day ago

              Sure, but they have one thing that makes them better than CPUs, they consume more ;)

  • phpnode a day ago

    it's the new "I love my tesla but here are 15 reasons why it's broken" - if you don't provide platitudes the mob provides pitchforks

  • shusaku 2 days ago

    > Or even the AI-driven incremental improvements?

    You have no idea what you are talking about. Every day there is plenty of research published that used AI to help achieve scientific goals.

    Now LLMs are another matter, and probably a ways off before we reap the benefit beyond day to day programming / writing productivity.

  • apples_oranges 2 days ago

    "AI is a competent specialist in all fields except in mine."

  • chermi a day ago

    Protein structure prediction is a pretty useful tool. There are various 'foundation' models in biology now that are quite useful. I don't know if you want to count those as AI or ML.

    If you're looking for breakthroughs due to AI, I think they're not going to be obviously attributable to AI. Focusing on the biology-related foundation models: the ability to more quickly search through the space of sequences->structure, drugs, and predicted cell states (1) will certainly lead to some things being discovered/rejected/validated faster.

    I heard about this vevo company recently so it's on my mind. Biology experiments are hard and time consuming, and often hard to exactly replicate across labs. A lot of the data is of the form

    a) start in cell state X (usual 'healthy' normal state) under conditions Y (environment variables like temperature, concentrations, etc.)

    b) introduce some environmental perturbation P, like some drug/chemical at some concentration, or maybe multiple perturbations at once

    c) record trajectory or steady/final state of cell.

    This data is largely hidden within published papers in non-standardized format. Vevo is attempting to collate all of those results with considerations for reproducibility into a standard easy-to-use format. The idea being that you can gradually build up a virtual sort of input-output (causal!) model that you can throw ideas for interventions against and see what it thinks would happen. Cells and biology are obviously enormously complicated so it's certainly not going to be 100% accurate/predictive, but my experience with network models in quantitative biology plus their proclaimed results make me pretty confident it's a sound approach.

    This approach is clearly "AI" driven (maybe I would call this ML), and if their claims are anything close to reality, this is an incredibly powerful tool for all of academia and industry. You can reduce the search space enormously and target your experiments to cover the areas where the virtual model doesn't seem so good, continuously improving it in a sort of crowd-sourced "active learning" manner. A continuously improving, experimentally backed, causal (w.r.t. perturbations) model of cells has so many applications. Again, I don't think this will directly lead to a breakthrough, but it can certainly make the breakthrough more likely and come faster.

    There are many other examples like this that are some combination of 1) collating+filtering+refining existing data into an accessible easily query-able format

    2) combining data + some physically motivated modeling to yield predictions where there is no data

    3) targeted, informed feedback loop via experiments, simulations, or modeling to improve the whole system where it's known to be weak or where more accuracy is desired.

    Assuming it all stays relatively open, that's undeniably a very powerful model for more effective science.

    And that's just one approach. In physics, ML can be used for finding and characterizing phase transitions, as one example. In the world of soft matter/biophysics simulation, here are a few ways ML is used:

    a) more efficient generation of configurations (Noe generative models). This is a big one, albeit still at kind of an early stage. Historically (simplifying), generating independent samples in the right regions of phase space meant you needed to integrate the system for long enough to hit those regions multiple times. So regions of the space separated by rare transitions will take a loooong time to hit multiple times. The solution was simply longer simulations. Now, under some restrictions, you can leverage and augment existing data (including simulation data) to directly generate independent samples in the regions of interest. This is a really big deal.

    b) more efficient, complex and accurate NN force-fields. Better incorporation of many-body and even quantum effects.

    c) more complex simulation approaches via improved pipelines like automated parameterization and discovery of collective variables to more efficiently explore relevant configuration space.

    Again, this is tooling that improves the process of discovery & investigation and thus directly contributes to science. Maybe not in the way you're picturing, but it is happening right now.

    1) vevo https://www.tahoebio.ai/

  • yfontana 2 days ago

    From the article:

    > Besides protein folding, the canonical example of a scientific breakthrough from AI, a few examples of scientific progress from AI include:1

    > Weather forecasting, where AI forecasts have had up to 20% higher accuracy (though still lower resolution) compared to traditional physics-based forecasts.

    > Drug discovery, where preliminary data suggests that AI-discovered drugs have been more successful in Phase I (but not Phase II) clinical trials. If the trend holds, this would imply a nearly twofold increase in end-to-end drug approval rates.

shantnutiwari 12 hours ago

Could it be we are all scared because, if we call the Emperor naked, and 15 years from now someone finds a useful case for AI (even if it's completely different from what exists today), everyone will point to our post and say "Hahaha look at those Luddites, didn't even believe AI was real LOL"?

  • Ukv 11 hours ago

    The recent high level of funding (Stargate, HUMAIN, ...), seemingly prompted mostly by LLMs, could plausibly be an emperor's new clothes fear of missing out among investors - will have to wait and see how it pans out.

    But for 2010s-era machine learning this article is talking about, I feel it largely already has been validated - from shunned and unfunded at the start of the decade to being the almost universal go-to for any NLP or computer vision task by the end. The article itself lists a few use-cases (protein folding, weather forecasting, drug discovery), and I think it's unlikely you've gone through the day without encountering at least a few more (maybe search engines query-understanding, language translation, generated video captions, OCR, or using a product that was scanned for defects).

    Not that every ML method will work out first try when applied to a new problem, but it's far from the case that we're waiting 15 years hoping for someone to maybe find a use-case for the field.

wrren a day ago

AI companies are hugely motivated to show beyond-human levels of intelligence in their models, even if it means flubbing the numbers. If they manage to capture the news cycle for a bit, it's a boost to confidence in their products and maybe their share price if they're public. The articles showing that these advances are largely junk aren't backed by corporate marketing budgets or the desires of the investor class like the original announcements were.

tonii141 2 days ago

This article addresses the misconception that arises when someone lacks a clear understanding of the underlying mathematics of neural networks and mistakenly believes they are a magical solution capable of solving every problem. While neural networks are powerful tools, using them effectively requires knowledge and experience to determine when they are appropriate and when alternative approaches are better suited.

  • sgt101 a day ago

    I think that, while the mathematics of neural networks is clearly completely understood, we do not really understand why neural networks behave the way that they do when combined with large amounts of real world data.

    In particular, the ability of autoregressive transformer-based networks to produce sequences of speech while being immutable still shocks me whenever I think about it. Of course, this says as much about what we think of ourselves and other humans as it does about the matrices. I also think that the weather forecasting networks are quite shocking; the compression that they have achieved in modeling the physical system that produces weather is frankly.... wrong... but it obviously does actually work.

    • skydhash a day ago

      You can represent many things with numbers and build an algorithm that does stuff with them. ML techniques are formulas where some specific constants are not known yet, so you go through a training phase to find them.

      While combinations of words are infinite, only some make sense, so there are a lot of recurrent patterns there, especially when you take a huge dataset like most of the internet and digital documents. I would be more surprised if the trained model were incapable of producing correct text, since it's overfitted to both the grammar and the lexicon. And I believe it's overfitted to general conversation patterns as well.
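
      (A toy sketch of that "formulas with unknown constants" framing - here the formula is just y = a*x + b, and training is nothing more than a search for a and b:)

        import torch

        # The formula is y = a*x + b; a and b are the unknown constants.
        # "Training" is just the search for values that fit the data.
        x = torch.linspace(0, 1, 100).unsqueeze(1)
        y = 3.0 * x + 0.5 + 0.05 * torch.randn_like(x)   # data from a known rule, plus noise

        a = torch.zeros(1, requires_grad=True)
        b = torch.zeros(1, requires_grad=True)
        opt = torch.optim.SGD([a, b], lr=0.1)

        for _ in range(2000):
            opt.zero_grad()
            loss = ((a * x + b - y) ** 2).mean()         # how wrong the current constants are
            loss.backward()
            opt.step()

        print(a.item(), b.item())                        # ends up near 3.0 and 0.5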

      • sgt101 a day ago

        There is a lot of retrieval in the behaviour of LLMs, but I find it hard to characterize it as overfitting. For example, ask ChatGPT to respond to your questions with grammatically incorrect answers - it can, which is hard to square with a model merely overfitted to grammatical text.

  • constantcrying 2 days ago

    This does not apply to PINNs though. They were used and investigated by people deeply knowledgeable about numerics and neural networks; they just totally failed to live up to expectation.
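
    (For anyone who hasn't met them: a PINN folds the governing equation into the training loss, so the network is penalized for violating the physics as well as for missing the boundary conditions. A toy sketch of the idea for du/dx = -u with u(0) = 1 - far simpler than the PDE problems where they disappointed:)

      import torch
      import torch.nn as nn

      # Toy PINN: learn u(x) satisfying du/dx = -u with u(0) = 1 on [0, 2].
      net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                          nn.Linear(32, 32), nn.Tanh(),
                          nn.Linear(32, 1))
      opt = torch.optim.Adam(net.parameters(), lr=1e-3)

      x = torch.linspace(0, 2, 64).unsqueeze(1).requires_grad_(True)
      x0 = torch.zeros(1, 1)

      for _ in range(5000):
          opt.zero_grad()
          u = net(x)
          du_dx = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
          physics_loss = ((du_dx + u) ** 2).mean()        # residual of du/dx = -u
          boundary_loss = ((net(x0) - 1.0) ** 2).mean()   # u(0) = 1
          (physics_loss + boundary_loss).backward()
          opt.step()

      # The learned u(x) should track exp(-x) on [0, 2]; the disappointments show up
      # on harder PDEs, stiff dynamics, and anything requiring extrapolation.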

    • tonii141 2 days ago

      "they just totally failed to live up to expectation"

      Because the expectation was too high. If you are aiming for precision, neural networks might not be the best solution for you. That is why generative AI works so well, it doesn’t need to be extremely precise. On the other hand you don't see people use neural networks in system control for cricital processes.

      • sgt101 a day ago

        Apart from self driving and autonomous flying...

        • tonii141 a day ago

          AI is used in scene understanding for those applications, but there is no neural network that is steering the wheel.

bwfan123 a day ago

Nice exposé of the human biases involved - we need more of these to balance the hype.

1) Instead of identifying a problem and then trying to find a solution, we start by assuming that AI will be the solution and then looking for problems to solve.

hammer in search of a nail

2) nearly complete non-publication of negative results

survivorship (and confirmation bias)

3) same people who evaluate AI models also benefit from those evaluations

power of incentives (and conflicts therein)

4) ai bandwagon effect, and fear of missing out

social-proof

scuff3d a day ago

"I suspect that scientists are switching to AI less because it benefits science, and more because it benefits them."

This is a huge problem in software, and it's not restricted to AI. So much of what has been adopted over the years has everything to do with making the programmer's life easier, but nothing to do with producing better software. AI is a continuation of that.

eviks a day ago

> I found that AI methods performed much worse than advertised.

Lesson learned: don't trust ads

> Most scientists aren’t trying to mislead anyone

More learning ahead, the exciting part of being a scientist!

i_c_b a day ago

I'm probably saying something obvious here, but it seems like there's this pre-existing binary going on ("AI will drive amazing advances and change everything!" "You are wrong and a utopian / grifter!") that takes up a lot of oxygen, and it really distracts from the broader question of "given the current state of AI and its current trajectory, how can it be fruitfully used to advance research, and what's the best way to harness it?"

This is the sort of thing I mean, I guess, by way of close parallel in a pre-AI context. For a while now, I've been doing a lot of private math research. Whether or not I've wasted my time, one thing I've found utterly invaluable has been the OEIS.org website, where you can just enter a sequence of numbers and search for it to see what contexts it shows up in. It's basically a search engine for numerical sequences. And the reason it has been invaluable is that I will often encounter some sequence of integers, I'll be exploring it, and then when I search for it on OEIS, I'll discover that that sequence shows up in very different mathematical contexts. That gives me an opening to 1) learn some new things and recontextualize what I'm already exploring and 2) gather raw material for asking new questions.

Likewise, Wolfram Mathematica has been a godsend, and for similar reasons - if I encounter some strange or tricky or complicated integral or infinite sum, it is frequently handy to just toss it into Mathematica, apply some combination of parameter constraints, Expands, and FullSimplifys, and see if whatever I'm exploring connects, surprisingly, to some unexpected closed form or special function. Once again, 1) I've learned a ton this way and gotten survey exposure to other fields of math I know much less well, and 2) it's been really helpful in iteratively helping me ask new, pointed questions.

Neither OEIS nor Mathematica can just take my hard problems and solve them for me. A lot of this process has been about identifying and evolving what sorts of problems I even find compelling in the first place. But these resources have been invaluable in helping me broaden what questions I can productively ask, and they do it through something like a high-powered, extremely broad, extremely fast search. My engagement with these tools has made me a lot smarter and a lot broader-minded, and it's changed the kinds of questions I can productively ask. To make a shaky analogy, books represent a deeply important frozen search of different fields of knowledge, and these tools represent a different style of search, reorganizing knowledge around whatever my current questions are - and acting in a very complementary fashion to books, too, as a way to direct me to books and articles once I have enough context.
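
(OEIS even exposes that kind of lookup programmatically. A sketch, assuming the public JSON search endpoint (oeis.org/search?q=...&fmt=json) and being defensive about the exact response shape, which has varied over the years:)

  import json
  import urllib.request

  # Look up an integer sequence on OEIS via its public search endpoint.
  # Handles both a bare list of results and an older wrapped form.
  def oeis_lookup(terms, max_hits=5):
      query = ",".join(str(t) for t in terms)
      url = f"https://oeis.org/search?q={query}&fmt=json"
      with urllib.request.urlopen(url) as resp:
          data = json.load(resp)
      results = data if isinstance(data, list) else (data.get("results") or [])
      return [(f"A{r['number']:06d}", r["name"]) for r in results[:max_hits]]

  # e.g. the Motzkin numbers turn up in a surprising range of contexts:
  for ident, name in oeis_lookup([1, 1, 2, 4, 9, 21, 51, 127]):
      print(ident, name)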

Although I haven't spent nearly as much time with it, what I've just described about these other tools certainly is similar to what I've found with AI so far, only AI promises to deliver even more so. As a tool for focused search and reorganization of survey knowledge about an astonishingly broad range of knowledge, it's incredible. I guess I'm trying to name a "broad" rather than "deep" stance here, concerning the obvious benefits I'm finding with AI in the context of certain kinds of research. Or maybe I'm pushing on what I've seen called, over in the land of chess and chess AI, a centaur model - a human still driving, but deeply integrating the AI at all steps of that process.

I've spent a lot of my career as a programmer and game designer working closely with research professors in R1 university settings (in both education and computer science), and I've particularly worked in contexts that required researchers to engage in interdisciplinary work. They're all smart people (of course), but the siloing of academic disciplines and specialties is real and pragmatically unavoidable, and it clearly casts a long shadow on what kind of research gets done. No one can know everything, and no one can know more than a sliver of what lies outside their own specialty within their own discipline - there's simply too much to know. There are a lot of contexts where "deep" is emphasized over "broad" for good reasons. But I think the potential for researchers to cheaply, quickly, and silently ask questions outside of their own specializations, and to get fast survey-level understandings of domains outside their own expertise, is potentially a huge deal for the kinds of questions they can productively ask.

But, insofar as any of this is true, it's a very different way of harnessing of AI than just taking AI and trying to see if it will produce new solutions to existing, hard, well-defined problems. But who knows, maybe I'm wrong in all of this.

intended 2 days ago

Verification is at the heart of economic and intellectual activity.

stonemetal12 a day ago

AI for science is industrial-scale p-hacking. Dump all the world's knowledge into an AI and see what falls out.

  • gowld a day ago

    There's an easy fix for that: Choose a smaller (more appropriate) P.
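
    (The quip has real content. A toy simulation of why: test 100 hypotheses on pure noise at α = 0.05 and some "discovery" is nearly guaranteed, while a Bonferroni-style smaller threshold pulls the family-wise error back down:)

      import numpy as np
      from scipy import stats

      # 1,000 simulated "studies", each testing 100 hypotheses on pure noise.
      rng = np.random.default_rng(0)
      n_tests, n_samples, alpha = 100, 30, 0.05

      hits_raw = hits_corrected = 0
      for _ in range(1000):
          data = rng.normal(size=(n_tests, n_samples))   # no real effect anywhere
          p = stats.ttest_1samp(data, 0.0, axis=1).pvalue
          hits_raw += (p < alpha).any()                  # any "discovery" at 0.05
          hits_corrected += (p < alpha / n_tests).any()  # Bonferroni: a smaller P

      print(hits_raw / 1000)        # ~0.99: noise alone almost always "finds" something
      print(hits_corrected / 1000)  # ~0.05: back to the advertised error rate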

blitzar a day ago

I got fooled by a Ponzi Scheme–here's what it taught me about how to make money.

cadamsdotcom a day ago

Hard to tell between “doesn’t work” and “too early”.

  • mmarian a day ago

    Line needs to be drawn somewhere though, otherwise you could make the same case for crypto and AR/VR.

thrdbndndn 2 days ago

I understand lots of people would (rightfully) say "no shit", but I think it's good to actually describe how it is in detail. So kudos to the author.

reify 2 days ago

Extremely diplomatic

Most scientists aren’t trying to mislead anyone, but because they face strong incentives to present favorable results, there’s still a risk that you’ll be misled.

In other words, scientists are trying to mislead everyone because there are a lot of incentives: money and professional status, to name just two.

A common problem across all disciplines of science.

KurSix 2 days ago

The comparison to the replication crisis is spot on

kumarvvr 2 days ago

Are complex math problems just solvable by LLMs, as a stream of language tokens?

I mean, there ought to be an element of abstract thought, abstract reasoning, abstract inter-linking of concepts, etc, to enable mathematicians to solve complex math theorems and problems.

What am I missing?

  • geremiiah 2 days ago

    LLMs are not involved anywhere. You start with some data, either simulation data or experimental data. Then you train a model to learn either a time evolution operator or a force field. Then you apply it to new input data and visualize the results.

    One typical motivation is that the simulation data takes months to generate, which is far too slow for experimental use cases. So the idea is to train a model that learns the underlying physics and is small enough that inference isn't prohibitively expensive; you can then use the ML model in lieu of the classical physics-based model.

    Where this usually fails is that while ML models can be trained well enough to replicate the training data, they typically fail to generalize well outside the domain and regime of that data. So unless your experimental problems lie entirely within the same domains and regimes as the training data, your model is not of much use.

    So claims of generalizability and applicability are always dubious.

    Lots of publications on this topic follow the same pattern: conceive of a new architecture or formalism, train an ML model on widely available data, results show that it can reproduce the training data to some extent, mention generalizability in the discussion but never test it.
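
    (A compressed sketch of that failure mode, with a toy 1-D stand-in for the physics - train a small network on one regime, then query it in a regime it never saw:)

      import torch
      import torch.nn as nn

      # Toy surrogate: learn f(x) = sin(x) from data in one regime (x in [0, 3]),
      # then evaluate in a regime the model never saw (x in [6, 9]).
      def make_data(lo, hi, n=512):
          x = torch.rand(n, 1) * (hi - lo) + lo
          return x, torch.sin(x)

      net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 1))
      opt = torch.optim.Adam(net.parameters(), lr=1e-3)

      x_train, y_train = make_data(0.0, 3.0)
      for _ in range(3000):
          opt.zero_grad()
          ((net(x_train) - y_train) ** 2).mean().backward()
          opt.step()

      with torch.no_grad():
          for lo, hi in [(0.0, 3.0), (6.0, 9.0)]:
              x, y = make_data(lo, hi)
              print(lo, hi, ((net(x) - y) ** 2).mean().item())
      # In-regime error is small ("it learned the physics"); out-of-regime error is
      # typically orders of magnitude larger (it learned the regime).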

spwa4 2 days ago

TLDR: AI is like any new method in software engineering. It is not a general solution and, by itself, not that useful - only as an addition. Unless an expert human takes a LOT of time to fine-tune the method (i.e. automatically selecting what works well in which case, so the best method is used in almost all cases), it only performs well in a very small subset of cases.

yapyap 2 days ago

Glad to see some people who bought into the nonsense are waking up

toss1 a day ago

>>Most scientists aren’t trying to mislead anyone, but because they face strong incentives to present favorable results, there’s still a risk that you’ll be misled.

>>We also found evidence, once again, that researchers tend not to report negative results, an effect known as reporting bias.

>>But unfortunately, the scientific literature is not a reliable source for evaluating the success of AI in science.

>> One issue is survivorship bias. Because AI research, in the words of one researcher, has “nearly complete non-publication of negative results,” we usually only see the successes of AI in science and not the failures. But without negative results, our attempts to evaluate the impacts of AI in science typically get distorted.

While these biases will absolutely create overconfidence and wasted effort, the fact that there are rapid advances with some clear successes, such as protein folding, drug discovery, and weather forecasting, leads me to expect there will be more very significant advances, in no small part because of the massive investment of funds and time in the problem of making AI-based advances.

And for exactly the reasons this researcher spent his time and funds on this work, there was learning despite the negative results; the effect of millions of people effectively searching and developing will result in more genuinely good advances being found and/or built.

Whether they are worth the total financial and human capital being spent is another question, but I'm expecting that to also come out positive.

BenFranklin100 2 days ago

The author is a Princeton PhD grad working in physics. Funding for this type of work usually comes from the NSF. NSF is under attack by DOGE, and Trump has proposed slashing the NSF budget by 55%.

A reason used to justify these massive cuts is that AI will soon replace traditional research. This post demonstrates this assumption is likely false.

  • thrdbndndn 2 days ago

    Kinda off-topic, but about the cutting itself:

    I used to work in academia and was involved in NSF-funded programs, so I have mixed feelings about this. Waste and inefficiency are rampant. BTW I'm not talking about failed projects or research that was deemed "not important", but things like buying million-dollar equipment just to use up the budget, which then sits idle.

    That said, slashing NSF funding by 50% won’t fix any of that. It’ll just shrink both the waste and the genuinely valuable research proportionally. So it's indeed a serious blow to real scientific progress.

    I don’t really have a point, just want to put it here. Also to be fair this kind of inefficiency isn't unique to academia; it’s common anywhere public money is involved.

    • nottorp 2 days ago

      > but things like buying million-dollar equipment just to use up the budget, which then sits idle.

      That's not specific to the US, and it's because of a perverse incentive coming from those who assign the funds.

      If you don't use them they cut your funding for the next cycle.

      • thrdbndndn a day ago

        This is a common saying, and I’m sure there’s some truth to it. But IMHO, the main reason is that PIs just want to use the money.

        Because… why not? It's free money. Having the newest shiny equipment, at the very least, boosts your group’s reputation. Not to mention that straight-up corruption (pocketing funds for personal gain) is not unheard of.

    • Upvoter33 a day ago

      Yeah companies never waste money.

      I’m tired of all the complaining about waste and overhead in academia. Companies waste money all the time…

    • dgb23 a day ago

      Even considering these inefficiencies, which are certainly to be taken seriously, there aren't a lot of things that have such a high ROI as research and education.

    • BenFranklin100 a day ago

      Research is a human endeavor, and like all human endeavors it can be improved. But having personally known NSF-funded researchers in the past, I can confidently say these are some of the most dedicated, hard-working people you will ever meet. Most of these people work 60-70 hrs a week for half the salary they could get in industry.

      Also, industry is far from perfect either. Think of high-profile failures like the Apple car or Meta’s Metaverse, both of which wasted literally tens of billions of dollars, many times the entire annual NSF budget. Let’s not hold scientists to standards applied nowhere else.

  • surfingdino 2 days ago

    When politicians get involved in research, scientific proof and reason don't always win.

    • adastra22 2 days ago

      Politicians have been involved in research for over a century.

    • toolslive 2 days ago

      Didn't a politician invent the internet?

      • eschaton 2 days ago

        That’s a meme created and spread by pseudo-journalist Declan McCullagh specifically to tar Al Gore in the lead-up to the 2000 election.

        Specifically, Gore said in an interview that he “took the initiative in creating the Internet” by introducing the bill to allow commercial traffic on ARPAnet, which McCullagh twisted in an article into “Al Gore claimed he invented the Internet” in order to smear him.

        • sundarurfriend a day ago

          Given how close that election turned out to be, this smear campaign likely changed the presidency, and given George WMD Bush's actions, changed the course of the world for the worse in many ways. (For those who were too young or not yet born at the time: these jokes were MASSIVE, to the extent that they became largely what Al Gore was known for, for years afterward. So it's not much of an exaggeration to say they had a material impact on his perception and hence the votes.)

          Al Gore understood technology and the internet, was a champion for the environment, and it's unbelievable today that he came that close to the presidency (and then lost). When people say "we live in the bad timeline", one of the closest good timelines is probably one where that election went differently.

      • TypingOutBugs 2 days ago

        Who?

        • sundarurfriend a day ago

          Al Gore played a big role in getting political (and hence economic) support for the expansion of the Internet.

          https://en.wikipedia.org/wiki/Al_Gore_and_information_techno... :

          > Al Gore, a strong and knowledgeable proponent of the Internet, promoted legislation that resulted in President George H.W Bush signing the High Performance Computing and Communication Act of 1991. This Act allocated $600 million

          > In the early 1990s the Internet was big news ... In the fall of 1990, there were just 313,000 computers on the Internet; by 1996, there were close to 10 million. The networking idea became politicized during the 1992 Clinton–Gore election campaign, where the rhetoric of the information highway captured the public imagination.

          Your parent comment is either joining in on the ridicule or at least repeating the misquote:

          > Gore became the subject of controversy and ridicule when his statement, "I took the initiative in creating the Internet", was widely quoted out of context. It was often misquoted by comedians and figures in American popular media who framed this statement as a claim that Gore believed he had personally invented the Internet.[54] Gore's actual words were widely reaffirmed by notable Internet pioneers, such as Vint Cerf and Bob Kahn, who stated, "No one in public life has been more intellectually engaged in helping to create the climate for a thriving Internet than the Vice President."

      • surfingdino a day ago

        Nope. Most of the early core research was done by RAND Corporation.

nathias 2 days ago

People will continue to publish cope articles about how AI is useless long after superintelligence is reached.

  • vaylian 2 days ago

    citation needed

Workaccount2 a day ago

This is the second article in a week where someone writes about how "AI" has failed them in their field (here it's physics; the other article was radiology), and in both articles they are using now-ancient, mid-2010s deep learning NNs.

I don't know if it's intentional, but the word "AI" means something different almost every year now. It's worse than papers getting released claiming "LLMs are unable to do basic math", and then you see they used GPT-3 for the study.