Ask HN: Why hasn't x86 caught up with Apple M series?

418 points by stephenheron 2 days ago

Hi,

My daily workhorse is an M1 Pro that I purchased on release date, and it has been one of the best tech purchases I have made; even now it handles anything I throw at it. My daily workload regularly involves an Android emulator, an iOS simulator and a number of Docker containers running simultaneously, and I never hear the fans. Battery life has taken a bit of a hit but it is still very respectable.

I wanted a new personal laptop, and I was debating between a MacBook Air and a Framework 13 with Linux. I wanted to lean into learning something new, so I went with the Framework, and I must admit I am regretting it a bit.

The M1 was released back in 2020, and I bought the Ryzen AI 340, which is one of AMD's newest 2025 chips, so AMD has had 5 years of extra development and I had expected them to get close to the M1 in terms of battery efficiency and thermals.

The Ryzen uses TSMC's N4P process compared to the M1's older N5 process. I managed to find a TSMC press release showing the performance/efficiency gains from the newer process: “When compared to N5, N4P offers users a reported +11% performance boost or a 22% reduction in power consumption. Beyond that, N4P can offer users a 6% increase in transistor density over N5”

I am sorely disappointed; using the Framework feels like using an older Intel-based Mac. If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, and if I open a YouTube video the fans will often spin up.

Why haven’t AMD/Intel been able to catch up? Is x86 just not able to keep up with the ARM architecture? When can we expect an x86 laptop chip to match the M1 in efficiency/thermals?!

To be fair, I haven’t tried Windows on the Framework yet, so it might be my Linux setup being inefficient.

Cheers, Stephen

ben-schaaf 2 days ago

Battery efficiency comes from a million little optimizations in the technology stack, most of which comes down to using the CPU as little as possible. As such the instruction set architecture and process node aren't usually that important when it comes to your battery life.

If you fully load the CPU and calculate how much energy an AI 340 needs to perform a fixed workload and compare that to an M1, you'll probably find similar results, but that only matters for your battery life if you're doing things like Blender renders, big compiles or gaming.

Take for example this battery life gaming benchmark for an M1 Air: https://www.youtube.com/watch?v=jYSMfRKsmOU. 2.5 hours is about what you'd expect from an x86 laptop, possibly even worse than the fw13 you're comparing here. But turn down the settings so that the M1 CPU and GPU are mostly idle, and bam you get 10+ hours.

Another example would be a ~5 year old mobile Qualcomm chip. It's on a worse process node than the AMD AI 340, much much slower and with significantly worse performance per watt, and yet it barely gets hot and sips power.

All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I've had to enable gpu video decoding on my fw16 and haven't noticed the fans on youtube.

  • jonwinstanley 2 days ago

    A huge reason for the low power usage is the iPhone.

    Apple spent years incrementally improving the efficiency and performance of their chips for phones. Intel and AMD were more desktop focused, so power efficiency wasn't the goal. When Apple's chips got so good they could transition into laptops, x86 wasn't in the same ballpark.

    Also, the iPhone is the most lucrative product of all time (I think), and Apple poured a tonne of that money into R&D and into taking the top engineers from Intel, AMD, and ARM, building one of the best silicon teams.

    • twilo a day ago

      Apple purchased Palo Alto Semi which made the biggest difference. One of their best acquisitions ever in my opinion… not that they make all that many of those anyway.

      • simonh a day ago

        > One of their best acquisitions ever in my opinion…

        NeXT? But yes, I completely get what you’re saying, I just couldn’t resist. It was an amazingly long sighted strategic move, for sure.

        • linotype a day ago

          I almost feel like NeXT was a reverse acquisition, like Apple became NeXT with an Apple logo.

          • pjmlp 10 hours ago

            Pretty much so, I would say.

      • nxobject 7 hours ago

        Equally (arguably) importantly, Johny Srouji joined Apple the same year as the PA Semi acquisition ('08) and led the Apple A4. (He previously worked at IBM on POWER7, which is a fascinating switch in market segment.)

    • Cthulhu_ a day ago

      I vaguely remember Intel tried to get into the low power / smartphone / tablet space at the time with their Atom line [0] in the late '00s, but due to core architecture issues they could never reach the efficiency of ARM-based chips.

      [0] https://en.wikipedia.org/wiki/Intel_Atom

      • usr1106 17 hours ago

        Intel and Nokia partnered around 2007-09 to introduce x86 phone SoCs and the required software stack. Remember MeeGo? Nokia engineers were horrified by the power consumption and were convinced it wouldn't work. But Nokia management wanted, at all costs, to move to a dual-supplier model instead of just relying on TI.

        MeeGo proceeded far too slowly, and in 2011 Elop chose his former employer's Windows instead. Nokia's decline only accelerated, and Intel hired many Nokia engineers.

        Soon Nokia made no phones at all, and Intel never even managed to ship its first mass-selling phone SoC.

        ARM-based SoCs were 10 years ahead in power saving. The ARM ecosystem did not make any fatal mistakes, and Intel never caught up.

        • pjmlp 10 hours ago

          Symbian was using ARM, though. And no one at the Espoo office was that happy with Elop, except for the board members that invited him.

      • aidenn0 a day ago

        I don't think it was core architecture issues. My impression is that over the years their efforts to get into low-power devices never got the full force of their engineering prowess.

        • kimixa a day ago

          I worked for an IP vendor that was in some Atom SoCs (over a decade ago now though) - from what I remember the perf/W was actually pretty competitive with contemporary ARM devices when we supplied the IP, but the chips took so long to actually end up in products that they fell behind - other customers were already on the next generation by that point, even if the initial projects started at about the same time. And the Atoms were buggy as hell; I never had more problems with dumb cache/fabric/memory controller issues.

          To me the Atom team always felt like a dead end inside Intel - everyone seemed to be trying to get into a different, higher-status team ASAP - our engineering contacts often changed monthly, if we even knew who our "contacts" were meant to be at any time. I think any product developed like that would struggle.

    • skeezyboy a day ago

      > and Apple poured a tonne of that money into R&D and taking the top engineers from Intel, AMD, and ARM, building one of the best silicon teams.

      how much silicon did Apple actually create? I thought they outsourced all the components?

      • kube-system a day ago

        Besides Apple's SoCs they also have made dedicated silicon for secure enclaves, wifi, bluetooth, ultra-wideband, and cellular radios, and motion coprocessors.

      • brokencode a day ago

        Outsourced to who? The only companies with the engineers you’d need are the other CPU makers like Intel, AMD, Qualcomm, and Nvidia. And none of them make a CPU as efficient as Apple does.

        • skeezyboy a day ago

          cpu yes, but what about the rest of the iphone?

          • brokencode a day ago

            They design much more in house than any other smartphone brand, except maybe Samsung.

            CPU, GPU, neural processor, image signal processor, U1 chip for device tracking, Secure Enclave for biometrics, a 5G modem (only used in the 16e so far)…

            They don’t manufacture the chips in house of course. They contract that out to TSMC and other companies.

        • ChrisGreenHeur a day ago

          Arm exists, and it is unknown how much tech Apple gets from Arm.

          • brokencode a day ago

            Arm licenses their designs to everybody. They are okay, but you are never going to make market leading processors by using the Arm designs.

            • skeezyboy a day ago

              The M1 and M2 were beating the best-in-class i7 when they were released, IIRC.

              • PaulRobinson a day ago

                Apple took the ARM base design (they licensed it), and then they modified and tweaked it.

                You get the ARM ISA, and compilers that work for ARM will compile to Apple Silicon. It's just that the actual hardware you get is better than the base design, and therefore beats other ARM processors in benchmarks.

                • stinkbeetle a day ago

                  > Apple took the ARM base design (they licensed it), and then they modified and tweaked it.

                  More likely it was derived from PWRficient, or a clean sheet design that took lessons from it.

                • diffuse_l a day ago

                  It's more than that. They have an unlimited license to Arm designs, and can change them as they see fit, since they were an early investor (or something along those lines). Other manufacturers can't get these terms, or if they can, it will be prohibitively expensive.

                  • sgerenser a day ago

                    The thing about Apple having a “special license” due to being a partial founder of Arm is an urban legend. They have an architectural license, just like several other companies making custom Arm CPUs do.

                    • brokencode a day ago

                      Yeah, why would ARM prevent other companies from paying more for the better license?

                      All they care about is that companies buy an ARM license, not that they use the boilerplate ARM CPU design.

                      Those designs are there to make it easier to build ARM-based chips for companies that would otherwise never be able to design their own.

                      • kalleboo 20 hours ago

                        Then why are they so shy about granting Qualcomm a license?

                        • brokencode 17 hours ago

                          Qualcomm has a lot of money and Arm wants it. They’re not shy, but greedy.

          • fennecbutt a day ago

            And TSMC (and therefore ASML etc.); Apple usually reserves the newest upcoming node for its own production.

      • giantrobot a day ago

        Apple bought PA Semi a long time ago. They have a significant silicon development group. Their architecture license (they were an early investor in ARM) for ARM means they get to basically do whatever they want using the ARM ISA. The SoCs in pretty much all their devices are designed in-house.

        • ljosifov a day ago

          Were they ARM investors at the time they needed a CPU for the Newton? Was that before or after e.g. the iPaq PDAs? And later - was it that Apple looked to be in danger of going under, and they sold their ARM stake and got a cash injection that way?

          I remember the iPaq PDA fondly. I wrote a demo to select a song from a playlist of a few thousand author-album-song entries with a voice query. The WiFi add-on was a big plastic "sleeve" that the iPaq slid into, not the other way around. It could run the ASR engine for about a whole 10 mins before it drained the battery flat, haha. :-)

          • giantrobot a day ago

            IIRC Apple originally invested in ARM during the development of the Newton. The original Newtons used ARM 610 CPUs. I don't know exactly when they sold their ARM stake but they kept their architecture license.

            The Newton was long before the iPaq; the MessagePad was released in 1993.

            • ljosifov 5 hours ago

              On the selling of the ARM stake, I asked ChatGPT:

              Q> And latter - was it that it looked that Apple maybe in danger of going under, and then they sold their ARM stake and got a cash injection that way?

              A> And yes. In the late-1990s turnaround, Apple sold down its ARM stake in multiple tranches after ARM’s 1998 IPO, realizing hundreds of millions of dollars that helped shore up finances (alongside the well-known $150 million Microsoft deal in Aug 1997).

        • skeezyboy a day ago

          what about all the components and sensors

          • simonh a day ago

            Apple has bought startups with various technologies like Anobit, that developed advanced flash memory controllers, and have funded development efforts by partners. For example Apple worked hand in glove with Sharp to develop the tech for their 5K display panels. They also now have their own cellular chip designs in some models, in their quest for independence from Qualcomm. That’s all from memory, I’m sure there are many more examples.

    • RossBencina a day ago

      I thought they just acquired P.A. Semi, job done.

      • simonh a day ago

        When they bought PA Semi the company worked on IBM Power architecture chips. It was very much the team Apple was after, not any one particular technology.

      • lstodd a day ago

        that was a part of it, yes.

        but do not forget how focused they (amd/intel, esp in opteron days -- edit) were on the server market.

    • DanielHB a day ago

      I don't think it is so much the efficiency of their chips for their hardware (phones) as the efficiency of their OS for their chips and hardware design (like unified memory).

      • zipityzi a day ago

        It is likely the hardware efficiency of their chips. Apple SoCs running industry-standard benchmarks still run very cool, yet still show dominant performance. The OS efficiency helps, but even under extreme stress tests like SPEC, the Apple SoCs dominate in perf & power.

        See Lunar Lake on TSMC N3B, 4+4, on-package DRAM versus the M3 on TSMC N3B, 4+4, on-package DRAM: https://youtu.be/ymoiWv9BF7Q?t=531

        The 258V (TSMC N3B) has a worse perf / W 1T curve than the Apple M1 (TSMC N5).

        • jhoechtl a day ago

          > It is likely the hardware effiency of their chips. Apple SoCs running industry-standard benchmarks still run very cool, yet still show dominant performance

          Dieselgate?

      • Eric_WVGG a day ago

        I have heard that Apple Silicon chips are designed around the retain-release cycle that goes back to NeXT and is still here today (hidden by ARC compilation), but I don't think that's the whole story. Back when the M1s came out, many benchmarks showed virtualized Windows blowing the doors off of market-equivalent x86 CPUs.

        Also, there's the obvious benefits of being TSMC's best customer. And when you design a chip for low power consumption, that means you've got a higher ceiling when you introduce cooling.

      • waffletower a day ago

        The SoC benefits are being ignored by some people here. Apple doesn't control every piece of software, as some here posit; however, OS optimizations and the use of extra-efficiency cores (which require SoC design but also need specific OS code support) are part of the performance.

    • jimbokun a day ago

      Textbook Innovator’s Dilemma.

    • alt227 a day ago

      > A huge reason for the low power usage is the iPhone.

      No, the main reason for better battery life is the RISC architecture. PCs on the ARM architecture show the same gains.

      • BearOso 10 hours ago

        Those PC ARM chips like Snapdragon were designed first and foremost for mobile, too.

      • alt227 a day ago

        Any downvoters care to actually leave me a reply telling me why?

        I'm not wrong!

        • tacticalturtle 19 hours ago

          You might find these posts informative:

          https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter

          https://chipsandcheese.com/p/why-x86-doesnt-need-to-die

          All instructions across x86 and Arm are being decoded to micro-operations, which are implementation specific. You could have an implementation which prioritizes performance, or an implementation that prioritizes power consumption, regardless of the ISA.

          Decoding instructions, particularly on a modern die, doesn’t consume a significant amount of area or power, even for complicated variable length instructions.

        • ben-schaaf 19 hours ago

          You are wrong. The Snapdragon X Elite is actually a great example: unlike the M1, its performance isn't particularly great, and it eats 50W under load. That makes its CPU cores a fair bit less efficient than AMD's, even on the same production node. If Apple Silicon didn't exist then you might instead argue that x86-64 is more efficient than ARM.

          If all that's true then why does Snapdragon have better battery life? As I said in my comment the great battery life comes from when the CPU isn't being used. It's everything else around it. That's where AMD is still significantly behind.

        • JustExAWS a day ago

          Because it’s a take that sounds like someone who has been reading comp.sys.mac.advocacy from 1995, when the PPC vs x86 wars were going on (and when PPC chips were already behind in performance), up through 2005, when Apple gave up and went to Intel.

  • RajT88 a day ago

    > All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

    Apple is vertically integrated and can optimize at the OS level and for many of the applications they ship with the device.

    Compare that to how many cooks are in the kitchen in Wintel land. Perfect example is trying to get to the bottom of why your windows laptop won't go to sleep and cooks itself in your backpack. Unless something's changed, last I checked it was a circular firing squad between laptop manufacturer, Microsoft and various hardware vendors all blaming each other.

    • diggan a day ago

      > Apple is vertically integrated and can optimize

      > Compare that to how many cooks are in the kitchen in Wintel land. Perfect example is trying to get to the bottom of why your windows laptop won't go to sleep and cooks itself in your backpack

      So, I was thinking like this as well, and after I lost my Carbon X1 I felt adventurous, but not too adventurous, and wanted a laptop that "could just work". The thinking was "If Microsoft makes both the hardware and the software, it has to work perfectly fine, right?", so I bit my lip and got a Surface Pro 8.

      What a horrible laptop that was, even while I was trialing just running Windows on it. It overheated almost immediately by itself, just idling, and STILL suffers from the issue where the laptop sometimes wakes itself while in my backpack, so when I actually needed it, of course it was hot and without battery. I've owned a lot of shit laptops through the years, even some without keys in the keyboard, back when I was dirt-poor, but the Surface Pro 8 is the worst of them all; I regret buying it a lot.

      I guess my point is that just because Apple seem really good at the whole "vertically integrated" concept, it isn't magic by itself, and Microsoft continues to fuck up the very same thing, even though they control the entire stack, so you'll still end up with backpack laptops turning themselves on/not turning off properly.

      I'd wager you could let Microsoft own every piece of physical material in the world, and they'd still not be able to make a decent laptop.

      • 0xffff2 a day ago

        Surprised to hear this. Back in the Surface Pro 4 days, the hardware was great. I made it through college doing 95% of my work on a Surface Pro 4 tablet with the magnetic keyboard and almost always made it through the entire day without having to plug it in.

        • RajT88 a day ago

          My wife swears by her surface pros, and she has owned a few.

          I've had a few Surface Book 2's for work, and they were fine except that they needed more RAM, and there was some issue with the connection between the screen and the base which made USB headsets hinky.

      • gowld a day ago

        Apple has been vertically integrated for 50 years. Microsoft has been horizontally integrated for 50 years.

        That's why Apple is good at making a whole single system that works by itself, and Microsoft is good at making a system that works with almost everything almost everyone has made almost ever.

        • kalleboo 20 hours ago

          Microsoft has been vertically integrated for nearly 25 years with the Xbox. I wonder if their internally-siloed nature doesn't allow them to learn from individual teams' success.

          • rerdavies 19 hours ago

            Once every decade or so, they build a thinly disguised low-end PC, and then spend another 8 years shipping a thinly disguised obsolete low-end PC.

            I don't think that really counts as vertical integration.

        • williamDafoe 15 hours ago

          The 2019 Macs were vertically integrated and Apple could do NOTHING good with the Intel PowerPig i9 CPUs. My i9 once ran down from 100% charge to 0% in 90 mins PLUGGED IN ON A 95W CHARGER! I was hosting a meeting. The M1-M4 CPUs forsake multithreading and downclock, and this is one of the many ways they save power. Video codecs are particularly power efficient on mobile chips!

          • danielbarla 14 hours ago

            I used a 2019 MacBook Pro for quite a while, and it was my first (and so far only) dip into Apple-land. While I appreciated the really solid build quality, great screen, etc, the battery life was pretty abysmal. We're talking easily under 2 hours if I had to be in a video call, which basically meant taking a charger to any meeting of decent length.

            The 2nd biggest disappointment was when I ran my team's compute-heavy workload locally, expecting blistering performance from the i9, only to find that the CPU got throttled to under 50% (I seem to recall 47%, but my memory is fuzzy) within 6 seconds of starting the workload. And this was essentially a brand new laptop, so it likely wasn't a case of blocked fan intakes. I fail to see the point of putting a CPU in a laptop that your thermal design simply can't handle.

            • ch_sm 8 hours ago

              Yeah, I had that same i9 16 inch from 2019. Easily the worst Mac I've ever owned (in 20 years!). Now I'm on an M2 16 inch and it is night and day.

    • ben-schaaf 19 hours ago

      This is easy to disprove. The Snapdragon X Elite has significantly better battery life than what AMD or Intel offer, and yet it's got the same number of cooks in the kitchen.

      > Perfect example is trying to get to the bottom of why your windows laptop won't go to sleep and cooks itself in your backpack

      Same thing happens in Apple land: https://news.ycombinator.com/item?id=44745897. My Framework 16 hasn't had this issue, although the battery does deplete slowly due to shitty modern standby.

      • williamDafoe 15 hours ago

        The X Elite is not better than Ryzen 5. Not better at all! It's why I own an HX 365 AMD laptop...

        • ben-schaaf 14 hours ago

          Do you have a source on that? From every benchmark I've looked at the X Elite gets similar battery life to Apple Silicon, pretty far ahead of AMD.

    • bitwize a day ago

      Microsoft is pushing "Modern Standby" over actual sleep, so laptops can download and install updates while closed at night.

      • reaperducer a day ago

        > Microsoft is pushing "Modern Standby" over actual sleep, so laptops can download and install updates while closed at night.

        Apple has this. It's called Power Nap. But for some reason, it doesn't cause the same problems reported by people here on HN.

        • diffeomorphism 17 hours ago

          It does cause the same problem but seems to be somewhat less frequent.

        • bitwize 7 hours ago

          It doesn't cause the same problems because Apple's Power Nap is something you have to enable. It's an option for users who find it useful. It's not replacing traditional S3 sleep, wherein virtually everything is unpowered except a trickle to keep the RAM alive. Microsoft is supplanting traditional sleep with Modern Standby. You can disable Modern Standby, but only with registry jiggery-pokery, and Microsoft is pressuring OEMs to remove S3 support altogether.

    • kube-system a day ago

      Also on the HN front page today:

      > Framework 16

      > The 2nd Gen Keyboard retains the same hardware as the 1st Gen but introduces refreshed artwork and updated firmware, which includes a fix to prevent the system from waking while carried in a bag.

      • RajT88 a day ago

        There are some reports of this with Macbooks as well. But my (non-scientific) impression is that a lot more people in Wintel land are seeing it. All of my work laptops, and a few of my personal laptops have done this to me since I started using Windows 10/11.

    • amazingman a day ago

      I remember a time when this was supposed to be Wintel's advantage. It's really strange to now be in a time where Apple leads the consumer computing industry in hardware performance, yet is utterly failing at evolving the actual experience of using their computers. I'm pretty sure I'm not the only one who would gladly give up a bit of performance if it were going to result in a polished, consistent UI/UX based on the actual science of human interface design rather than this usability hellscape the Alan Dye era is sending us into.

    • galad87 a day ago

      macOS is a resource hungry pig, I wouldn't bet too much on it making a difference.

  • aurareturn 2 days ago

      All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.
    
    This isn't true. Yes, uncore power consumption is very important but so is CPU load efficiency. The faster the CPU can finish a task, the faster it can go back to sleep, aka race to sleep.
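
    A toy sketch of the energy arithmetic behind race to sleep (all numbers invented for illustration, not measurements):

      #include <cstdio>

      int main() {
          // Hypothetical chip A: 5 W while active, needs 2 s for a job.
          // Hypothetical chip B: 8 W while active, needs 1 s for the same job.
          // Both idle at 0.3 W for the rest of a 10 s window.
          double energy_a = 5.0 * 2.0 + 0.3 * 8.0;  // 12.4 J
          double energy_b = 8.0 * 1.0 + 0.3 * 9.0;  // 10.7 J
          std::printf("A: %.1f J, B: %.1f J\n", energy_a, energy_b);
          // The hungrier-but-faster chip uses less total energy because it
          // spends more of the window asleep.
          return 0;
      }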

    Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

    Another thing that makes Apple laptops feel way more efficient is that they use a true big.LITTLE design, while AMD and Intel's little cores are actually designed for area efficiency and not power efficiency. In the case of Intel, they stuff in as many little cores as possible to win MT benchmarks. In real world applications, the little cores are next to useless because most applications prefer a few fast cores over many slow cores.

    • yaro330 a day ago

      > Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

      This is false; in cross-platform tasks it's on par with, if not worse than, the latest x86 arches. As others pointed out: 2.5h in gaming is about what you'd expect from a similarly built x86 machine.

      They are winning due to lower idle and low-load consumption, which they achieve by integrating everything as much as possible - something that's basically impossible for AMD and Intel.

      > The faster the CPU can finish a task, the faster it can go back to sleep, aka race to sleep.

      That may have been true when CPU manufacturers left a ton of headroom on the V/F curve, but it's not really true anymore. A Zen 4 core's power draw shoots up sharply past 4.6 GHz and nearly triples when you approach 5.5 GHz (compared to 4.6); are you gonna complete the task 3 times faster at 5.5 GHz?

      • aurareturn a day ago

          This is false; in cross-platform tasks it's on par with, if not worse than, the latest x86 arches.
        
        This is Cinebench 2024, a cross platform application: https://imgur.com/a/yvpEpKF

          They are winning due to lower idle and low-load consumption, which they achieve by integrating everything as much as possible - something that's basically impossible for AMD and Intel.
        
        Weird because LNL achieved similar idle wattage as Apple Silicon.[0] Why do you say it's impossible?

          That may have been true when CPU manufacturers left a ton of headroom on the V/F curve, but it's not really true anymore. A Zen 4 core's power draw shoots up sharply past 4.6 GHz and nearly triples when you approach 5.5 GHz (compared to 4.6); are you gonna complete the task 3 times faster at 5.5 GHz?
        
        Honestly not sure how your statement is relevant.

        [0]https://www.notebookcheck.net/Dell-XPS-13-9350-laptop-review...

        • atwrk a day ago

          > This is Cinebench 2024, a cross platform application: https://imgur.com/a/yvpEpKF

          You sure like that table, don't you? Trying to find the source of those Blender numbers, I came across many Reddit posts of yours with that exact same table. Sadly those also don't have a source - they are not from the notebookcheck source.

          • aurareturn a day ago

            The reason I keep reposting this table is that people post incorrect statements about AMD/Apple so often, usually with zero data backing them.

            For Blender numbers, M4 Pro numbers came from Max Tech's review.[0] I don't remember where I got the Strix Halo numbers from. Could have been from another Youtube video or some old Notebookcheck article.

            Anyway, Blender has official GPU benchmark numbers now:

            M4 Pro: 2497 [1]

            Strix Halo: 1304 [2]

            So the M4 Pro is roughly 90% faster in the latest Blender. The most likely reason Blender's official numbers favor the M4 Pro even more is more recent optimizations.

            Sources:

            [0]https://youtu.be/0aLg_a9yrZk?si=NKcx3cl0NVdn4bwk&t=325

            [1] https://opendata.blender.org/devices/Apple%20M4%20Pro%20(GPU...

            [2] https://opendata.blender.org/devices/AMD%20Radeon%208060S%20...

            • vient a day ago

              Weren't we comparing CPUs though? Those Blender benchmarks are for GPUs.

              Here is M4 Max CPU https://opendata.blender.org/devices/Apple%20M4%20Max/ - median score 475

              Ryzen MAX+ PRO 395 shows median score 448 (can't link because the site does not seem to cope well with + or / in product names)

              Resulting in M4 winning by 6%

              • aurareturn a day ago

                  Weren't we comparing CPUs though? Those Blender benchmarks are for GPUs.
                
                Yes, but I was asked about Blender GPU.

                Blender CPU tasks are highly parallel. AMD's Ryzen Max 395 has great MT performance. It's generally 5-20% slower in CPU MT than the M4 Max depending on the application.

        • yaro330 a day ago

          > Weird because LNL achieved similar idle wattage as Apple Silicon.[0] Why do you say it's impossible?

          And where is LNL now? How's the company that produced it? Even under Pat Gelsinger they said that LNL is a one off and they're not gonna make any more of them. It's commercially infeasible.

          > Honestly not sure how your statement is relevant.

          How is you bringing up synthetics relevant to race to idle?

          Regardless, a number of things can be done on Strix Halo to improve the performance; the first would be switching to an optimized Linux distro, or at least kernel. That would claw back 5-20% depending on the task. It would also improve single-core efficiency: I've seen my 7945HX drop from 14-15W idle on Windows to about 7-8W on Linux, because Windows likes to jerk the CCDs around non-stop and throw tasks around willy-nilly, which means the second CCD and I/O die never properly idle.

          • aurareturn a day ago

              And where is LNL now? How's the company that produced it? Even under Pat Gelsinger they said that LNL is a one off and they're not gonna make any more of them. It's commercially infeasible.
            
            Why does it matter that LNL is bad economically? LNL shows that it's definitely possible to achieve the same or even better idle wattage than Apple Silicon.

              How is you bringing up synthetics relevant to race to idle?
            
            I truly don't understand what you mean.

        • ben-schaaf 16 hours ago

          > This is Cinebench 2024, a cross platform application: https://imgur.com/a/yvpEpKF

          Cool, now compare M1 to AI 340. The AI 340 has slightly better single core and better multi-core. If battery life was all about race to idle like you claim then the AI 340 should be better than the M1.

          See also Snapdragon X Elite, which is significantly slower than the AI 340, uses more power under load, so in total has much less efficient cores, and yet still beats the AI 340 on battery life.

      • williamDafoe 15 hours ago

        I did MIPS-per-watt calculations in 2017 and Apple (the A10, I think) was 2-3x better than Intel. See "how to build a computer" by Donald Gillies (SlideShare slides). I was shocked; I didn't expect this at all!

    • jandrewrogers a day ago

      > Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

      This is not true. For high-throughput server software x86 is significantly more efficient than Apple Silicon. Apple Silicon optimizes for idle states and x86 optimizes for throughput, which assumes very different use cases. One of the challenges for using x86 in laptops is that the microarchitectures are server-optimized at their heart.

      ARM in general does not have the top-end performance of x86 if you are doing any kind of performance engineering. I don't think that is controversial. I'd still much rather have Apple Silicon in my laptop.

      • aurareturn a day ago

          For high-throughput server software x86 is significantly more efficient than Apple Silicon.
        
        In the server space, x86 has the highest performance right now. Yes. That's true. That's also because Apple does not make server parts. Look for Qualcomm to try to win the server performance crown in the next few years with their Oryon cores.

        That said, Graviton is at least 50% of all AWS deployments now. So it's winning vs x86.

          ARM in general does not have the top-end performance of x86 if you are doing any kind of performance engineering. I don't think that is controversial.
        
        I think you'll have to define what top-end means and what performance engineering means.

        • ksec a day ago

          I don't think the point of Amazon using ARM was performance; it was purely cost optimisation. At one point, nearly 40% of Intel's server revenue was coming from Amazon. They just figured out that at their scale it would be cheaper to do it themselves.

          But I am purely guessing that ARM has raised their price per core, so it makes less financial sense to do a yearly CPU update. They are also going into the server CPU business, meaning they now have some incentive to keep it all to themselves. Which makes Nvidia's moves really smart, as they decided to go for the ISA licence and do it by themselves.

          • aurareturn 14 hours ago

            Server CPUs do not win on performance alone. They win on performance/$, LTV/$, etc. That's why Graviton is winning on AWS.

  • rollcat a day ago

    > It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I've had to enable gpu video decoding on my fw16 and haven't noticed the fans on youtube.

    I've worked in video delivery for quite a while.

    If I were to write the law, decision-makers wilfully forcing software video decoding where hardware is available would be made to sit on these CPUs with their bare buttocks. If that sounds inhumane, then yes, this is the harm they're bringing upon their users, and maybe it's time to stop turning the other cheek.

    • throwawaylaptop a day ago

      I run Linux Mint Mate on a 10 year old laptop. Everything works fine, but watching YouTube makes my wireless USB dongle mouse stutter a LOT. Basically if CPU usage goes up, mouse goes to hell.

      Are you telling me that for some reason it's not using any hardware acceleration available while watching YouTube? How do I fix it?

      • olyjohn a day ago

        It's probably the 2.4GHz WiFi transmitter interfering with the 2.4GHz mouse transmitter. You probably notice it during YouTube because it's constantly downloading. Try a wired mouse.

        • throwawaylaptop a day ago

          Interesting theory. The wired mouse is trouble-free, but I figured that's because of a better sampling rate and less overhead over all. Maybe I'll try a Bluetooth mouse or some other frequency, or the laptop on wired Ethernet, to see if the theory pans out.

          • Sohcahtoa82 5 hours ago

            > Maybe I'll try a bluetooth mouse

            Bluetooth is also 2.4 GHz.

        • lostmsu a day ago

          Or just switch to 5GHz or 6GHz range.

      • dismalaf a day ago

        The easiest way is to use Chrome or a Chrome-based browser, since they bundle codecs with the browser. If you're using Firefox, you need to make sure you have the codecs. I know nothing about Mint specifically, though, so I don't know whether they'd automatically install codecs or not.

        • lights0123 a day ago

          You specifically don't want to use the bundled codecs since those would be CPU decode only.

          • dismalaf 20 hours ago

            Straight up false. I have both Chrome and Vivaldi installed on Linux, and both have hardware video decoding enabled OOTB...

            You check it by putting chrome://gpu in the address bar.

        • throwawaylaptop a day ago

          I'm using Brave and it seems the "enable hardware acceleration" box is checked.

  • throwup238 2 days ago

    > All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

    A good demonstration is the Android kernel. By far the biggest difference between it and the stock Linux kernel is power management. Many subsystems down to the process scheduler are modified and tuned to improve battery life.

    • qcnguy 2 days ago

      And the more relevant case for laptops is macOS, which is heavily optimized for battery life and power draw in ways that Linux just isn't, neither is Windows. A lot of the problems here can't actually be fixed by intel, amd, or anyone designing x86 laptops because getting that level of efficiency requires the ability to strongly lead the app developer community. It also requires highly competent operating system developers focusing on the issue for a very long time, and being able to co-design the operating system, firmware and hardware together. Microsoft barely cares about Windows anymore, the Linux guys only care about servers since forever, and that leaves Apple alone in the market. I doubt anything will change anytime soon.

      • curt15 a day ago

        >And the more relevant case for laptops is macOS, which is heavily optimized for battery life and power draw in ways that Linux just isn't, neither is Windows.

        What are some examples of power draw savings that Linux is leaving on the table?

        • qcnguy 13 hours ago

          There's no equivalent of AppNap if I recall correctly and drivers often aren't aggressive at shutting down unused devices, or they don't do it at all. Linux has historically had a lot of problems with reliable suspends too.

      • john01dav 2 days ago

        Power efficiency is very important to servers too, for cost instead of battery life. But energy is energy. So I suspect that the power draw is in userland systems that are specific to the desktop, like desktop environments, and using a simpler desktop environment may be worthwhile.

        • qcnguy 2 days ago

          It's important but not relative to performance. Perf/watt thinking has a much longer history in mobile and laptop spaces. Even in servers most workloads haven't migrated to ARM.

        • umbra07 a day ago

          I assumed the same thing, until I tested my hypothesis. KDE Plasma 6 uses less power on idle than just `Hyprland` (tiling WM) without anything like a notification daemon, idler, status bar, etc.

          • pdimitar 16 hours ago

            Were you able to find out why? This is very interesting and I'd never guess it.

        • mcny 2 days ago

          I used Ubuntu around 2015-2018 and got hit with a nasty defect around GNOME Online Accounts integration (please correct me if the words are wrong here). For some reason, it got stuck in a loop or a bad state on my machine. I have since decided that I will never add any of my online accounts (Facebook, Google, or anything) to GNOME.

      • deaddodo a day ago

        If x86 just officially said “we’re cutting off 32-bit legacy” one day (similar to how Apple did), they could toss out 95% of the crap that makes them power inefficient. Just think of the difference dropping A10 offered for memory efficiency.

        “Modern Standby” could be made to actually work, ACPI states could be fixed, a functional wake-up state built anew, etc. Hell, while it would allow pared down CPUs, you could have a stop-gap where run mode was customized in firmware.

        Too much credit is given to Apple for “owning the stack” and too little attention to legacy x86 cruft that allows you to run classic Doom and Commander Keen on modern machines.

        • fluoridation a day ago

          >If x86 just officially said “we’re cutting off 32-bit legacy” one day (similar to how Apple did), they could toss out 95% of the crap that makes them power inefficient.

          Where do you get this from? I could understand that they could get rid of the die area devoted to x86 decoding, but as I understand it x86 and x86-64 instructions get interpreted by the same execution units, which are bitness blind. What makes you think it's x86 support that's responsible for the vast majority of power inefficiency in x86-64 processors?

          • hajile a day ago

            Intel has proposed APX to address this. It does away with some of the 32-bit garbage that complicates design for no good payoff. Most importantly, it increases from 16 to 32 registers and allows 3-register instructions (almost all x86 instructions are 1-register or 2-register instructions). This would strip out tons of MOV instructions which was proven with AMD64 to have a decent impact on performance.

            Reduced I-cache, uop cache, and decoder pressure would also have a beneficial impact. On the flip side, APX instructions would all be an entire byte longer than their AMD64 counterparts, so some of the benefits would be more muted than they might first appear, and choosing between 16 registers with shorter instructions vs 32 registers with longer instructions is yet another tradeoff for compilers to make (and takes another step down the path of being completely unoptimizable by humans).

            • fluoridation a day ago

              >This would strip out tons of MOV instructions which was proven with AMD64 to have a decent impact on performance.

              Sure, but the topic is optimizing power efficiency by removing support for an instruction set. That aside, if an instruction isn't very performant, it isn't much of an issue per se. It just means it won't get used much and so chip design resources will be suboptimally allocated. That's a problem for Intel and AMD, and for nobody else.

              • rerdavies 19 hours ago

                ARM has THREE instruction sets (four?): AArch32, AArch64, and various incarnations of Thumb. (A Pi 5 supports all three.)

                • fluoridation 12 hours ago

                  Okay? x86-64 has like twenty extensions. What's your point?

          • delfinom a day ago

            From what I understood, it's not "32-bit instructions" that are the problem; it's the load of crap associated with those 32-bit processors. There's more to x86 than just the instruction set. Operating systems need to carry the baggage in x86 if they want to allow users to run on old and new processors.

            • fluoridation a day ago

              Before addressing anything else, "software is complicated by having to support legacy stuff" is not a valid argument for removing that support at the hardware level. If a software developer wishes to design their software without that legacy support, that's their prerogative.

              >Operating systems need to carry the baggage in x86 if they want to allow users to run on old and new processors.

              What do you mean by this exactly? Are you talking about hybrid execution like WOW64, or simple multi-platform support like the Linux kernel?

              WOW64 is irrelevant as far as power efficiency is concerned if the user doesn't run any x86 software. If the user is running x86 software, that's a reason not to remove that support.

              Multi-platform support shouldn't have an effect on power efficiency, beyond complicating the design of the system. Saying that the Linux kernel should stop supporting x86 so x86-64 can be more power-efficient is like saying that it should stop supporting... whatever, PowerPC, for that same reason. It's a non sequitur.

              • JustExAWS a day ago

                Removing 32 bit hardware support frees up die space and it frees up storage space and RAM since 32 bit and 64 bit libraries had to be on disk and in memory.

                • fluoridation a day ago

                  They don't use memory if they're not used, but you do save storage. Neither one has any effect on power efficiency, though. None of these savings require the hardware to lose useful features. Microsoft could at any time decide to drop WOW64.

                  Saving die space also has no effect on power efficiency, beyond reducing the total transistor count. I'd be very surprised if the x86-specific decoding logic makes up a significant area of your typical die. Maybe you'd make the processor 3% more efficient? Something like that?

                  • scarface_74 a day ago

                    If any 32 bit app is launched the shared libraries will be loaded. It’s not a big deal on Macs. But it is a big deal on iPhones.

                    I’m not sure how it works in the modern era. But back in the day there was also a performance cost when you had a mix of 16 bit code and 32 bit code in memory. I don’t know how it would be in 32 bit vs 64 bit.

                    And being able to get away with less RAM also improves battery life because keeping RAM refreshed uses energy - again a bigger factor on mobile.

                    The smaller the die, the less energy it uses. You can also use that space for efficiency cores.

                    • fluoridation a day ago

                      > If any 32 bit app is launched the shared libraries will be loaded.

                      Like I said, if the 32-bit stuff is getting used, that's an argument not remove the support.

                      > And being able to get away with less RAM also improves battery life because keeping RAM refreshed uses energy - again a bigger factor on mobile.

                      Memory allocation exists purely at the software level. The hardware doesn't understand whether a particular region has been allocated or not; the only difference between an allocated page and an unallocated one is that the former appears in the OS's VMM data structure as allocated (i.e. it's just more bits in memory). The power consumption of RAM scales with the total cells installed, not with how much the OS has decided is "in use".

                      • JustExAWS 21 hours ago

                        That’s what I’m saying - iPhones have traditionally come with less memory than Android phones of the same generation. Apple has been able to get away with it.

                        So should Apple have also kept the PPC emulator around for Macs?

                        • fluoridation 20 hours ago

                          >That’s what I’m saying - iPhones have traditionally come with less memory than Android phones of the same generation. Apple has been able to get away with it.

                          There's no such thing as "how little RAM you can get away with". A user is always better served by having more RAM. The limiting factor is the monetary budget, not the power budget. If a manufacturer puts less RAM on a computer it's only to cut costs, not as a power optimization. Compared to the CPU, RAM is free, power-wise. In fact, since optimizing for space is often in opposition to optimizing for time, having more RAM can save power by saving time spent computing things.

                          >So should Apple have also kept the PPC emulator around for Macs?

                          I'm not really interested in discussing what compatibility support should be included in any given OS. It's not the topic of discussion. The topic of discussion is whether cutting x86 support from x86-64 processors would result in significant power savings, and I maintain that it wouldn't. It would result in at best marginal power savings at the cost of a useful feature.

        • anonymars a day ago

          > “Modern Standby” could be made to actually work, ACPI states could be fixed, a functional wake-up state built anew, etc. Hell, while it would allow pared down CPUs, you could have a stop-gap where run mode was customized in firmware.

          I'm confused, how is any of this related to "x86" and not the diverse array of third party hardware and software built with varying degrees of competence?

    • stuaxo a day ago

      It's a shame they are so bad at upstreaming stuff, and run on older kernels (which in turn makes upstreaming harder).

  • prmoustache 2 days ago

    > It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding.

    To be fair, usually Linux itself has hardware acceleration available, but the browser vendors tend to disable GPU rendering except on controlled/known perfectly working combinations of OS/hardware/drivers, and they do much less testing on Linux. In most cases you can force-enable GPU rendering in about:config, try it out yourself, and leave it on unless you get recurring crashes.

    • deaddodo a day ago

      The only browser I’ve ever had issues with enabling video acceleration on Linux is Firefox.

      All the Blink-based ones just work as long as the proper libraries are installed and said libraries properly detect hardware support.

      • goneri a day ago

        I run Fedora, and for legal reasons they ship a version that has this problem. Have you tried Mozilla's Flatpak build? I use it instead and it resolves all my problems.

      • int_19h a day ago

        When I enabled HW acceleration on my Linux laptop to see how much it would improve battery life in Linux, my automated test (which is basically just browsing Reddit) would start crashing every 20 minutes or so.

  • koala_man a day ago

    I once saw a high resolution CPU graph of a video playing in Safari. It was completely dead except for a blip every 1/30th of a second.

    Incredible discipline. The Chrome graph in comparison was a mess.

    • novok a day ago

      The Safari team explicitly treats perf as a target. I just wish they weren't so bad about extensions and adblock, and I'd use it as my daily driver. But those paper cuts make me go back to Chromium browsers all the time.

  • mayama 2 days ago

    I disable CPU turbo boost on Linux. The fans rarely start on the laptop and the system is generally cool. Even working on development and compilation I rarely need the extra perf. For my 10-year-old laptop I also cap the max clock to 95% to stop the fans from always starting. YMMV.
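
    For anyone wanting to try the same thing, here is a minimal sketch of the sysfs knobs involved (assuming the intel_pstate driver and root; amd-pstate/acpi-cpufreq expose /sys/devices/system/cpu/cpufreq/boost instead, so your paths may differ):

      #include <fstream>
      #include <iostream>

      int main() {
          // 1 = turbo off on intel_pstate (on AMD, write 0 to .../cpufreq/boost instead).
          std::ofstream no_turbo("/sys/devices/system/cpu/intel_pstate/no_turbo");
          // Cap the max clock at 95% of the available performance range.
          std::ofstream max_perf("/sys/devices/system/cpu/intel_pstate/max_perf_pct");
          if (!no_turbo || !max_perf) {
              std::cerr << "couldn't open sysfs knobs (not root, or a different cpufreq driver?)\n";
              return 1;
          }
          no_turbo << 1 << '\n';
          max_perf << 95 << '\n';
          return 0;
      }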

    • just6979 9 hours ago

      This is a big reason. Apple tunes their devices not to push the extreme edges of the performance that is possible, so they don't fall off that cliff of inefficiency. Combined with really great perf/watt, they can run them at "90%" and stay nice and cool while sipping power (relatively), while most Intel/AMD machines are allowed to push their parts to "110%" much more often. That might give them a leg up in raw performance (for some workloads), but it runs into the gross inefficiencies of pushing the envelope, so that marginal performance increase takes 2-3x more power.

      If you manually go in and limit a modern Windows laptop's max performance to just under what the spec sheet indicates, it'll be fairly quiet and cool. In fact, most have a setting to do this, but it's rarely on by default because the manufacturers want to show off performance benchmarks. Of course, that's while also touting battery life that is not possible when in the mode that allows the best performance...

      This doesn't cover other stupid battery life eaters like Modern Standby (it's still possible to disable it with registry tweaks, sketched below; do it!), but if you don't need absolute max perf for renders or compiling or whatever, put your Windows or Linux laptop into "cool & quiet" mode and enjoy some decent extra battery.
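
      The registry tweak usually passed around is the PlatformAoAcOverride value. A sketch below, with the caveats that this is the commonly cited key rather than anything officially documented, it needs an elevated process and a reboot, and newer Windows 11 builds reportedly ignore it:

        #include <windows.h>
        #include <cstdio>

        int main() {
            // Commonly cited switch for turning off Modern Standby (S0 low-power idle):
            // HKLM\SYSTEM\CurrentControlSet\Control\Power, DWORD PlatformAoAcOverride = 0.
            // Delete the value to revert. Link against advapi32.
            HKEY key;
            if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                              L"SYSTEM\\CurrentControlSet\\Control\\Power",
                              0, KEY_SET_VALUE, &key) != ERROR_SUCCESS) {
                std::fprintf(stderr, "open failed - run elevated\n");
                return 1;
            }
            DWORD off = 0;
            RegSetValueExW(key, L"PlatformAoAcOverride", 0, REG_DWORD,
                           reinterpret_cast<const BYTE*>(&off), sizeof(off));
            RegCloseKey(key);
            return 0;
        }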

      It would also be really interesting to see what Apple Silicon could do under some Extreme OverClocking fun with sub-zero cooling or such. It would require a firmware & OS that allow more tuning and tweaking, so it's not going to happen anytime soon, but it could actually be a nice brag for Apple if they did let it happen.

  • lenkite 2 days ago

    Hell, Apple CPUs are even optimized for Apple software's GC-style calls like retain/release on objects. It seems that if you want optimal performance and power efficiency, you need to own both the hardware and the software.

    Looks like general purpose CPUs are on the losing train.

    Maybe Intel should invent desktop+mobile OS and design bespoke chips for those.

    • NobodyNada a day ago

      > Apple CPUs are even optimized for Apple software's GC-style calls like retain/release on objects.

      I assume this is referring to the tweet from the launch of the M1 showing off that retaining and releasing an NSObject is like 3x faster. That's more of a general case of the ARM ISA being a better fit for modern software than x86, not some specific optimization for Apple's software.

      x86 was designed long before desktops had multi-core processors and out-of-order execution, so for backwards compatibility reasons the architecture severely restricts how the processor is allowed to reorder memory operations. ARM was designed later, and requires software to explicitly request synchronization of memory operations where it's needed, which is much more performant and a closer match for the expectations of modern software, particularly post-C/C++11 (which have a weak memory model at the language level).

      Reference counting operations are simple atomic increments and decrements, and when your software uses these operations heavily (like Apple's does), it can benefit significantly from running on hardware with a weak memory model.
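
      A minimal sketch of the pattern (my own illustration of a generic refcount, not Apple's actual objc_retain/objc_release): the increment can be fully relaxed, and only the final decrement needs acquire/release ordering. On x86 even the relaxed fetch_add compiles to a fully-ordered locked instruction, while ARM can use weaker, cheaper atomics.

        #include <atomic>

        struct RefCounted {
            std::atomic<long> refs{1};

            void retain() {
                // Taking another reference needs no ordering at all.
                refs.fetch_add(1, std::memory_order_relaxed);
            }

            void release() {
                // The final decrement must synchronize with every prior write
                // to the object before it gets destroyed.
                if (refs.fetch_sub(1, std::memory_order_acq_rel) == 1) {
                    delete this;
                }
            }
        };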

      • stinkbeetle a day ago

        > I assume this is referring to the tweet from the launch of the M1 showing off that retaining and releasing an NSObject is like 3x faster. That's more of a general case of the ARM ISA being a better fit for modern software than x86, not some specific optimization for Apple's software.

        It's not really even the ISA, mainly the implementation. Atomics on Apple cores are 3x faster than on Intel (6 cycles of back-to-back latency vs 18). AMD's atomics have 6-cycle latency.

    • aurareturn 2 days ago

        It seems if you want optimal performance and power efficiency, you need to own both hardware and software.
      
      Does Apple optimize the OS for its chips and vice versa? Yes. However, Apple Silicon hardware is just that good and that far ahead of x86.

      Here's an M4 Max running macOS running Parallels running Windows when compared to the fastest AMD laptop chip: https://browser.geekbench.com/v6/cpu/compare/13494385?baseli...

      M4 Max is still faster even with 14 out of 16 possible cores being used. You can't chalk that up to optimizations anymore because Windows has no Apple Silicon optimizations.

      • lenkite a day ago

        Not really sure whether it makes a difference, but the Parallels VM is running Windows Pro, while the ASUS gaming laptop is running Windows Home.

    • davsti4 a day ago

      > Maybe Intel should invent desktop+mobile OS and design bespoke chips for those.

      Or, contribute efficiency updates to popular open projects like firefox, chromium, etc...

    • lelanthran a day ago

      > Maybe Intel should invent desktop+mobile OS and design bespoke chips for those.

      Wouldn't it be easier for Intel to heavily modify the Linux kernel instead of writing their own stack?

      They could even go as far as writing the sleep utilities for laptops, or even their own window manager to take advantage of the specific mods in the ISA?

      • hajile a day ago

        Intel was working with Nokia and heavily investing in MeeGo until it was killed by Elop+Microsoft.

        If it hadn't been killed, it may have become something interesting today.

    • hoppp a day ago

      [flagged]

  • pzo 2 days ago

    > most of which comes down to using the CPU as little as possible.

    At least on mobile, Apple advocates the other way with race-to-sleep: do the calculation as fast as you can with powerful cores so that the whole chip can go back to sleep earlier and take naps more often.

    • creshal 2 days ago

      Intel promoted the same idea under the name HUGI (Hurry Up and Go Idle) about 15 years ago, when ultrabooks were the new hot thing.

      But when Apple says it, software devs actually listen.

      • int_19h a day ago

        Peer pressure. When everybody else does it and you don't, your app sticks out like a sore thumb and makes users unhappy.

        The other aspect of it is that paid software is more prevalent in macOS land, and the prices are generally higher than on Windows. But the flip side of that is that user feedback is taken more seriously.

    • ben-schaaf 19 hours ago

      Race to sleep is all about using the CPU as little as possible. Given that modern AMD chips are faster than the Apple M1, this clearly does not account for the disparity in battery life.

    • nikanj a day ago

      And then Microsoft adds an animated news tracker to the left corner of the start bar, making sure the cpu never gets to idle.

  • mrtksn a day ago

    Which should also mean that using that M1 machine with Linux will give you an Intel/AMD-like experience, not the M1-with-macOS experience.

    • ben-schaaf 19 hours ago

      Yes and no. The optimizations made for battery life are a combination of software and hardware. You'll get bad battery life on an M1 with Linux when watching youtube without hardware acceleration, but if you're just idling (and if Linux idles properly) then it should be similar to macOS.

  • whatevaa 2 days ago

    Turning down the settings will get you a worse experience, especially if you turn them down to the point where the CPU and GPU are "mostly idle". Not comparable.

  • sys_64738 2 days ago

    Sounds like death by (2^10)-24 cuts for the x86 architecture.

  • ToucanLoucan a day ago

    I honestly don't see myself ever leaving MacBooks at this point. It's the whole package: the battery life is insane, and I've literally never had a dead laptop when I needed it, no matter what I'm doing or where I'm at; it runs circles around every other computer I own, save for my beastly gaming PC; the stability and consistency of macOS, and the underlying Unix architecture for a lot of tooling; all the way down to the build quality being damn near flawless, save for the annoying lack of ports (though increasingly, I find myself needing ports less and less).

    Like, would I prefer an older-style Macbook overall, with an integrated card reader, HDMI port, ethernet jack, all that? Yeah, sure. But to get that now I have to go to a PC laptop and there's so many compromises there. The battery life isn't even in the same zip code as a Mac, they're much heavier, the chips run hot even just doing web browsing let alone any actual work, and they CREAK. Like my god I don't remember the last time I had a Windows laptop open and it wasn't making all manner of creaks and groans and squeaks.

    The last one would be solved, I guess, if you went for something super high end, or at least I hope it would be; but I dunno, if I'm dropping $3k+ either way, I'd just as soon stay with the MacBook.

    • AlexandrB a day ago

      > Like, would I prefer an older-style Macbook overall, with an integrated card reader, HDMI port, ethernet jack, all that? Yeah, sure.

      Modern MacBook pros have 2/3 (card reader and HDMI port), and they brought back my beloved MagSafe charging.

      • prewett a day ago

        I was all for MagSafe, but after buying an M2, I realized that the USB-C charging was better. I found the cables came out almost as well as the MagSafe if I stepped on them, but you can plug them in to either side. I seem to always be on the wrong side, so the MagSafe cable has to snake around to the other side.

      • ToucanLoucan a day ago

        No shit! I'm still rocking the M1 Pro for personal and the M2 Air for work so I do have magsafe back for one of them at least, but just USB-C besides that.

        But yeah IMHO there's just no comparison. Unless you're one of those folks who simply cannot fucking stand Mac, it's just no contest.

    • solardev a day ago

      Even the high end ones (Razers, Asus, Surface Books, Lenovos) are mere lookalikes and don't run anywhere as well as the MacBooks. They're hot and heavy and loud and full of driver issues and discrete graphics switching headaches and of course the endless ads and AI spam of modern Windows. No comparison at all...

maxsilver a day ago

> Why haven’t AMD/Intel been able to catch up? Is x86 just not able to keep up with the ARM architecture? When can we expect a x86 laptop chip to match the M1 in efficiency/thermals?!

AMD kind of has: the "Max+ 395" is (within a 5% margin or so) pretty close to the M4 Pro on both performance and energy use. (It's in the 'Framework Desktop', for example, but not in their laptop lineup yet.)

AMD/Intel hasn't surpassed Apple yet (there's no answer for the M4 Max / M3 Ultra, without exploding the energy use on the AMD/Intel side), but AMD does at least have a comparable and competitive offering.

  • hajile a day ago

    M4 Pro was a massive step back in perf/watt over M3 Pro. To my knowledge, there aren't any M4 die shots around, which has led to speculation that expected yields on the M4 Max were bad enough that Apple made the M4 Pro a binned M4 Max, which comes with tradeoffs like much worse leakage current.

    That said, Hardware Canucks did a review of the 395 in a mobile form factor (Asus ROG Flow Z13) with the TDP at 70W (lower than the max 120W TDP you see in desktop reviews). This lower-than-max TDP also gets you closer to the perf/watt sweet spot.

    The M4 Pro scores slightly higher in Cinebench R24 despite being 10P+4E vs a full 16 P-cores on the 395, all while using something like 30% less power. The M4 Pro scores nearly 35% higher in the single-core R24 benchmark too. 395 GPU performance is comparable to the M4 Pro in productivity software. More specifically, they trade blows based on which is more optimized in a particular app, but AMD GPUs have way more optimizations in general, and gaming should be much better with x86 + an AMD GPU vs Rosetta 2 + GPU translation layers + Wine/Crossover.

    The M4 Pro gets around 50% better battery life for tasks like web browsing when accounting for battery size differences, and more than double the battery life per watt-hour when doing something simple like playing a video. Battery life under full load is a bit better for the 395, but doing the math, this definitely involves the 395 throttling significantly down from its 70W TDP.

  • Luker88 9 hours ago

    I see virtually nobody pointing out that Apple is consistently using fab nodes that are more advanced than Intel/AMD's.

    Rule of thumb is roughly a 15% advantage to distribute between power and performance there.

    Catching up while remaining on older nodes is no joke.

    • ksec 8 hours ago

      > using fab nodes that are more advanced than intel/AMD.

      I am usually the one pointing it out on HN. But it is still not common knowledge, even after we went from M1 to M4.

      And this thread's S/N ratio is already 10 times better than most other Apple Silicon discussions.

      AMD's Max+ 395 is on N4, a 5nm-class process.

      The Apple M4 is on N3, a 3nm-class process.

  • remify a day ago

    I've got an AMD Ryzen 9 365 processor in my new laptop and I really like it. Huge battery life and good performance when needed; it's comparable to the M3 (not the Max).

  • anthonypasq a day ago

    I just recently was trying to buy a laptop and was looking at that chip, but like you said, it's not available in anything except the Framework desktop and a weird tablet that's 2.5x as expensive as a MacBook. It's competitive on paper, but still completely infeasible at the moment.

    • akvadrako a day ago

      There is only the HP ZBook Ultra G1a.

      Some Chinese companies have also announced laptops with it coming out soon.

    • nodesocket a day ago

      There are a few mini PCs using the 395+. Check out the Beelink GTR9 Pro (AMD Ryzen AI Max+ 395) and the GMKtec EVO-X2.

    • bilbo0s a day ago

      Also, you don't realize until you try them out that other issues make running models on the AMD chip ridiculously slow compared to running the same models on an M4. Some of that's software. But a lot is how the chip/memory/neural etc are organized.

      Right now, AMD is not even in the ballpark.

      In fact, the real kick in the 'nads was my fully kitted M4 laptop outperforming the AMD. I just gave up.

      I'll keep checking in with AMD and Intel every generation though. It's gotta change at some point.

  • WinstonSmith84 a day ago

    You can find that processor in the 14" HP ZBook Ultra G1a (which is also Ubuntu certified). There is also the Asus Z13, though I'm not certain it works well with Linux.

  • jeffbee a day ago

    This is not even a remotely accurate characterization of the relative performance of the Ryzen AI Max+ 395 and the Apple M4. I have both an expensive implementation of the former and the $499 version of the latter, and my M4 Mac mini beats the Ryzen by 80% or more in many single-threaded workloads, like browser benchmarks.

fafhnir 2 days ago

I have the same experience here with my MacBook Air M1 from 2020 with 16GB RAM and 512GB SSD. After three years, I upgraded to a MacBook Pro with M3 Pro, 36GB of RAM, and 2TB of storage. I use this as my main machine with 2 displays attached via a TB4 dock.

I'm working in IT and I get all new machines for our company over my desk to check them, and I observed the exact same points as the OP.

The new machines are either fast and loud and hot and with poor battery life, or they are slow and "warm" and have moderate battery life.

But I have yet to see a business laptop, whether ARM, AMD, or Intel, that can even compete with the M1 Air, let alone the M3 Pro! Not to mention all the issues with crappy Lenovo docks, etc.

It doesn't matter if I install Linux or Windows. The funny part is that some of my colleagues have ordered a MacBook Air or Pro and run their Windows or Linux in a virtual machine via Parallels.

Think about it: Windows 11 or Linux in a VM is faster, snappier, quieter, and has even longer battery life than these systems running natively on a business machine from Lenovo, HP, or Dell.

Well, your mileage may vary, but IMHO there is no alternative to a Mac nowadays, even if you want to use Linux or Windows.

  • devjab a day ago

    I'm still using my MacBook Air M1 with 8GB of RAM as my personal workhorse. It runs Docker Desktop and VSC better than my T14-whatever Windows machine with 32GB of RAM. But that is Windows, and it has a bunch of enterprise stuff running. I assume it would work better with Linux, or even Windows without whatever our IT does to control it.

    With Nvidia's GeForce Now I can even play games on it, though I wouldn't recommend it for any serious gamers.

    • zikduruqe a day ago

      Ha. Same here. My personal MBA M1/8GB just chugs along with whatever I need it to do. I have a T480 32GB linux machine at home that I love, but my M1 just does what I need it to do.

      And at the shop we are doing technology refreshes for the whole dev team upgrading them to M4s. I was asked if I wanted to upgrade my M1 Pro to an M4, and I said no. Mainly because I don't want to have to move my tooling over to a new machine, but I am not bottlenecked by anything on my current M1.

      • elzbardico a day ago

        Man, it's absolutely trivial to migrate your configurations to a new machine.

        • zikduruqe a day ago

          Oh I know. Just lazy and have other things to do than to migrate a machine.

          • elzbardico 9 hours ago

            I understand, it is that for me, as a hardware addict, it is almost personally offensive that someone would refuse an upgrade. I am unsettled and disturbed. :-D

    • commakozzi a day ago

      I'm using GeForce Now on my M1 Air and it's wonderful. Yeah, I'll play competitive multiplayer on dedicated hardware (primarily Xbox Series X, because I refuse to own a Windows machine and I'm too lazy for Linux right now -- also, I'm hoping against hope for a real Steam console), but GeForce Now has been wonderful for other things: survival, crafting, MMOs, single-player RPGs, Cyberpunk, Battlefield, pretty much anything where you can deal with a few milliseconds of input latency.

      To be honest, what they're doing here is wizardry to my dumb brain. The additional latency, to me, just feels like the amount of latency you get from a controller on an Xbox. However, if you play something that requires very quick input (a competitive FPS, for example) AND you're connected to game servers with anywhere from 5ms to 100ms+ latency (playing on EU servers, for example), that added latency just becomes too much.

      I'll say this though: I've played Warzone solo on GeForce Now, connected to a local server with no more than 5ms latency, and it felt pretty decent. Definitely playable, and I think I got 2nd or 1st in a few of those games, but as soon as it gets over like 15-20ms, you're cooked.

  • alt227 a day ago

    > there is no alternative to a Mac nowadays

    I need to point this out all the time these days it seems, but this opinion is only valid if all you use is a laptop and all you care about is single-core performance.

    The computing world is far bigger than just laptops.

    Big music/3d design/video editing production suites etc still benefit much more from having workstation PCs with higher PCI bandwidth, more lanes for multiple SSDs and GPUs, and high level multicore processing performance which cannot be matched by Apple silicon.

    • shermantanktop a day ago

      Doesn’t Apple have significant market share for pro music and video editing?

      For studio movies, render farms are usually Linux but I think many workstation tasks are done on Apple machines. Or is that no longer true?

      • alt227 a day ago

        Prosumer, but not pro. Pixar for example are not modelling and animating on Apple Silicon.

        On the video side Vegas Pro is used in a lot of production houses, and it does not run on Apple Silicon at all.

      • dr_kiszonka a day ago

        > Doesn't Apple have significant market share for pro music and video editing?

        I thought so too, but I see a lot more people using non-Apple systems for music production than I expected. I don't know whether I was too influenced by Apple's marketing (computers for creators) or something has changed.

      • nartho a day ago

        Music production is overwhelmingly Apple. It comes from the fact that Protools was Mac-only until the late 2000s, and Logic Pro, Apple's DAW and alternative to Protools, was also very popular and also Mac-only. That left Cubase for Windows and a few others like Ableton, plus less popular DAWs like Reaper, Fruity Loops, etc. Today there are a few more options for Windows, like Studio One, which is very good.

        Add to that the fact that most audio interfaces were FireWire: plug and play on Mac and a real struggle on Windows. With Windows you also had to deal with ASIO, and once you picked your audio interface it had to be used for both inputs and outputs (still the case to this day), forcing you to combine interfaces with workarounds like ASIO4ALL if you wanted to use different interfaces, while macOS just lets you pick different interfaces for input and output.

        Linux had very interesting projects; unfortunately, music production relies on a lot of expensive audio plugins that a lot of the time come in installers and are a pain in the butt to use through Proton/Wine, when it's possible at all. That means that doing music production on Linux means possibly not using plugins you paid for and not finding alternatives to them. It's a shame because I'd love to be able to only use Linux.

        • philistine a day ago

          > most of the audio interfaces were firewire

          With Apple removing FireWire support this fall, and so many devices still plugging along in so many studios, I wonder what's going to happen.

        • alt227 a day ago

          > That left Cubase for windows

          When I was at music college doing production courses, they exclusively taught Cubase on windows.

          • nartho a day ago

            Yes, for a while that was the only "serious" option for Windows

            • alt227 a day ago

              Yes, and Logic Pro was generally looked at as 'My first DAW' in most studios I have been in.

              Also Protools was available on Windows from 1997 and was used in many PC based studios.

              • nartho a day ago

                I remember Logic Pro becoming quite popular after version 8; even though veterans who knew Protools backwards had no reason to switch, a lot of the newer studios used Logic.

                You're right about protools on Windows. I got confused about protools not requiring the use of their own interfaces

  • diggan a day ago

    > Well, your mileage may vary, but IMHO there is no alternative to a Mac nowadays, even if you want to use Linux or Windows.

    I guess I'd slightly change that to "MacBook" or similar, as Apple are top-in-class when it comes to laptops, but for desktop they seem to not even be in the fight anymore, unless reducing power consumption is your top concern. But if you're aiming for "performance per money spent", there isn't really any alternative to non-Apple hardware.

    I do agree they do the best hardware in terms of feeling though, which is important for laptops. But computing is so much larger than laptops, especially if you're always working in the same place everyday (like me).

    • int_19h a day ago

      Mac Studio is pretty good on everything except raw GPU speed. Which depending on your use cases may be completely irrelevant.

      • pdimitar 11 hours ago

        I don't disagree but Mac Studio is also way too expensive. I can build a professional Linux workstation for 70% of the price of a non-minimal Studio, and I'll get a lot of goodies in the package (and future-proofed configuration too).

BirAdam a day ago

First, Apple did an excellent job optimizing their software stack for their hardware. This is something that few companies have the ability to do as they target a wide array of hardware. This is even more impressive given the scale of Apple's hardware. The same kernel runs on a Watch and a Mac Studio.

Second, the x86 platform has a lot of legacy, and each operation on x86 is translated from an x86 instruction into RISC-like micro-ops. This is an inherent penalty that Apple doesn't have to pay, and it is also why Rosetta 2 can achieve "near native" x86 performance; both platforms translate the x86 instructions.

Third, there are some architectural differences even if the instruction decoding steps are removed from the discussion. Apple Silicon has a huge out-of-order buffer, and it's 8-wide vs x86's 4-wide. From there, the actual logic is different, the design is different, and the packaging is different. AMD's Ryzen AI Max 300 series does get close to Apple by using many of the same techniques, like unified memory and tossing everything onto the package; where it does lose, it's due to all of the other differences.

In the end, if people want crazy efficiency Apple is a great answer and delivers solid performance. If people want the absolute highest performance, then something like Ryzen Threadripper, EPYC, or even the higher-end consumer AMD chips are great choices.

  • zipityzi a day ago

    This seems mostly misinformed.

    1) Apple Silicon outperforms all laptop CPUs in the same power envelope on 1T on industry-standard tests: it's not predominantly due to "optimizing their software stack". SPECint, SPECfp, Geekbench, Cinebench, etc. all show major improvements.

    2) x86 also heavily relies on micro-ops to greatly improve performance. This is not a "penalty" in any sense.

    3) x86 is now six-wide, eight-wide, or nine-wide (with asterisks) for decode width on all major Intel & AMD cores. The myth of x86 being stuck on four-wide has been long disproven.

    4) Large buffers, L1, L2, L3, caches, etc. are not exclusive to any CPU microarchitecture. Anyone can increase them—the question is, how much does your core benefit from larger cache features?

    5) Ryzen AI Max 300 (Strix Halo) gets nowhere near Apple on 1T perf / W and still loses on 1T perf. Strix Halo uses slower CPUs versus the beastly 9950X below:

    SPEC2017 1T scores (int, fp, geomean):

        Fanless iPad M4 P-core:  10.61, 15.58, 12.85
        AMD 9950X (Zen 5):       10.14, 15.18, 12.41
        Intel 285K (Lion Cove):   9.81, 12.44, 11.05

    Source: https://youtu.be/2jEdpCMD5E8?t=185, https://youtu.be/ymoiWv9BF7Q?t=670

    The 9950X & 285K eat 20W+ per core for that 1T perf; the M4 uses ~7W. Apple has a node advantage, but no node on Earth gives you 50% less power.

    There is no contest.

    • BirAdam a day ago

      1. Apple’s optimizations are one point in their favor. XNU is good, and Apple’s memory management is excellent.

      2. X86 micro-ops vs ARM decode are not equivalent. X86’s variable length instructions make the whole process far more complicated than it is on something like ARM. This is a penalty due to legacy design.

      3. The OP was talking about the M1. AFAIK, M4 is now 10-wide, and most x86 is 6-wide (Zen 5 does some weird stuff). x86 was 4-wide at the time of the M1's introduction.

      4. M1 has over 600 reorder buffer registers… it’s significantly larger than competitors.

      5. Close relative to x86 competitors.

      • 1000100_1000101 a day ago

        > 4. M1 has over 600 reorder buffer registers… it’s significantly larger than competitors.

        And? Are you saying neither Intel nor AMD engineers were able to determine that this was a bottleneck worth chasing? The point was, anybody could add more cache, rename, reorder or whatever buffers they wanted to... it's not Apple secret-sauce.

        If all the competition knew they were leaving all this performance/efficiency on the table despite there being a relatively simple fix, that's on them. They got overtaken by a competitor with a better offering.

        If all the competition didn't realize they were leaving all this performance/efficiency on the table despite there being a relatively simple fix, that's also on them. They got overtaken by a competitor with a better offering AND more effective engineers.

    • phkahler a day ago

      >> x86 is now six-wide, eight-wide, or nine-wide (with asterisks) for decode width on all major Intel & AMD cores. The myth of x86 being stuck on four-wide has been long disproven.

      From the AMD side it was 4 wide until Zen 5. And now it's still 4 wide, but there is a separate 4-wide decoder for each thread. The micro-op cache can deliver a lot of pre-decoded instructions so the issue width is (I dunno) wider but the decode width is still 4.

    • hajile a day ago

      2. uops are a cope that costs. The uop cache and its controller use tons of power. ARM designs with 32-bit support had a uop cache, but they cut it when going to 64-bit-only designs (look at ARM A715 vs A710), which dramatically reduced frontend size and power consumption.

      3. The claim was never "stuck on 4-wide", but that going wider would incur significant penalties, which is the case. AMD uses two 4-wide decoders and pays a big penalty in complexity trying to keep them coherent and occupied. Intel went 6-wide for Golden Cove, which is infamous for being the largest and most power-hungry x86 design in a couple of decades. This seems to prove the 4-wide people right.

      4. This is only partially true. The ISA impacts which designs make sense which then impacts cache size. uop cache can affect L1 I-cache size. Page size and cache line size also affect L1 cache sizes. Target clockspeeds and cache latency also affect which cache sizes are viable.

    • naasking a day ago

      > 2) x86 also heavily relies on micro-ops to greatly improve performance. This is not a "penalty" in any sense.

      It's an energy penalty, even if wall clock time improves.

    • Der_Einzige a day ago

      A whole lot of bluster in this thread, but finally someone who's actually doing their research chimes in. Thank you for giving me a place to start in understanding why this is such a deep mystery!

  • jcranmer a day ago

    > Second, the x86 platform has a lot of legacy, and each operation on x86 is translated from an x86 instruction into RISC-like micro-ops. This is an inherent penalty that Apple doesn't have pay, and it is also why Rosetta 2 can achieve "near native" x86 performance; both platform translate the x86 instructions.

    Can we please stop with this myth? Every superscalar processor is doing the exact same thing, converting the ISA into the µops (which may involve fission or fusion) that are actually serviced by the execution units. It doesn't matter if the ISA is x86 or ARM or RISC-V--it's a feature of the superscalar architecture, not the ISA itself.

    The only reason that this canard keeps coming out is because the RISC advocates thought that superscalar was impossible to implement for a CISC architecture and x86 proved them wrong, and so instead they pretend that it's only because x86 somehow cheats and converts itself to RISC internally.

    • adwn a day ago

      > they pretend that it's only because x86 somehow cheats and converts itself to RISC internally.

      Which hasn't even been the case anymore for several years now. Some µOPs in modern x86-64 cores combine memory access with arithmetic operations, making them decidedly non-RISC.

  • rerdavies 18 hours ago

    The "RISC" thing is an experiment that failed in the 90s. There's nothing particularly RISCy about the ARM instruction set. It is a pretty darned complicated instruction set.

    ARM processors ALSO decode instructions into micro-ops. And Apple chips do too. Pretty much a draw. The first stage in the execution pipeline of every modern processor is a decode stage.

stego-tech a day ago

There’s a number of reasons, all of which in concert create the appearance of a performance gap between the two:

* Apple has had decades optimizing its software and hardware stacks to the demands of its majority users, whereas Intel and AMD have to optimize for a much broader scope of use cases.

* Apple was willing to throw out legacy support on a regular basis. Intel and AMD, by comparison, are still expected to run code written for DOS or for specific extensions in major enterprises, which adds complexity and cost.

* The “standard” of x86 (and demand for newly-bolted-on extensions) means effort into optimizations for efficiency or performance meet diminishing returns fairly quickly. The maturity of the platform also means the “easy” gains are long gone/already done, and so it’s a matter of edge cases and smaller tweaks rather than comprehensive redesigns.

* Software in x86 world is not optimized, broadly, because it doesn’t have to be. The demoscene shows what can be achieved in tight performance envelopes, but software companies have never had reason to optimize code or performance when next year has always promised more cores or more GHz.

It boils down to comparing two different products and asking why they can’t be the same. Apple’s hardware is purpose-built for its userbase, operating systems, and software; x86 is not, and never has been. Those of us who remember the 80s and 90s of SPARC/POWER/Itanium/etc recall that specialty designs often performed better than generalist ones in their specialties, but lacked compatibility as a result.

The Apple ARM vs Intel/AMD x86 is the same thing.

  • shermantanktop a day ago

    Intel chose and stuck with backcompat as a strategy. They could, tomorrow, split their designs into legacy hardware and modern hardware. They haven't, but Apple has made breaking generational changes many times.

    Apple also has a particular advantage in owning the OS and having the ability to force independent developers to upgrade their software, which makes incompatible updates (including perf optimizations) possible.

    • spixy a day ago

      Intel also wanted to break backcompat and start fresh with Itanium but it failed.

      • shermantanktop a day ago

        So they abandoned it. Meanwhile Apple has powered through that problem how many times?

        The price of Apple’s approach is that 3p developers have to dance to Apple’s tune. And that’s a tough road, as evidenced by the small set of really successful companies which have bet the farm on Apple.

  • PaulRobinson a day ago

    Fair enough, but Apple Silicon is not a specialist chip in the way a SPARC chip was. It's a general purpose SoC & SiP stack. There is nothing stopping Intel being able to invest in SoC & SiP and being able to maintain backward compatibility while providing much better power/performance for a mobile (including laptop and tablet), product strategy.

    They could also just sit down with Microsoft and say "Right, we're going to go in an entirely different direction, and provide you with something absolutely mind-blowing, but we're going to have to do software emulation for backward compatibility and that will suck for a while until things get recompiled, or it'll suck forever if they never do".

    Apple did this twice in the last 20 years - once on the move from PowerPC chips to Intel, and again from Intel to Apple Silicon.

    If Microsoft and enough large OEMs (Dell, etc.) thought there was enough juice in the newly proposed architecture to cause a major redevelopment of everything from mobile to data-centre-level compute, they'd line right up, because they know that if you can significantly reduce power consumption while smashing benchmarks, there are going to be long, long wait times for that hardware and software, and it's payday for everyone.

    We now know so much more about processor design, instruction set and compiler design than we did when the x86 was shaping up, it seems obvious to me that:

    1. RISC is a proven entity worth investing in

    2. SoC & SiP is a proven entity worth investing in

    3. Customers love better power/performance curves at every level from the device in their pocket to the racks in data centres

    4. Intel is in real trouble if they are seriously considering the US government owning actual equity, albeit proposed as non-voting, non-controlling

    Intel can keep the x86 line around if they want, but their R&D needs to be chasing where the market is heading - and fast - while bringing the rest of the chain along with them.

    • alt227 a day ago

      > Right, we're going to go in an entirely different direction, and provide you with something absolutely mind-blowing, but we're going to have to do software emulation for backward compatibility and that will suck for a while until things get recompiled, or it'll suck forever if they never do

      For an example of why this doesn't work, see 'Intel Itanium'.

      • PaulRobinson a day ago

        That's because the direction they took was awful. That does not mean other directions do not exist right now that they could raise money for and invest in.

        The alternative is death - they do nothing, they're going to die.

        Which option do you think they should take?

        • alt227 a day ago

          > The alternative is death - they do nothing, they're going to die.

          That's a subjective opinion. Plenty of people still value higher-power multi-core chips over Apple Silicon, because they are still better at doing real work. I don't think they need to go in a new direction personally; I was just showing an example of why your proposed solution is not a silver bullet.

  • lokar a day ago

    It's a bit unfair to say Apple threw out backwards compatibility.

    Each time, they had a pretty good emulation story to keep most stuff (certainly popular stuff) working through a multi-year transition period.

    IMO, this is better than carrying around 40 years of cruft.

    • layer8 a day ago

      This was absolutely not the case for 32-bit iOS apps, which they dropped from one year to the next like a hot potato. I still mourn the loss of some of the apps.

    • alt227 a day ago

      Apple purposely make it so after 3 new versions of the OS you cannot upgrade the OS on the hardware any further. This in turn means you cannot install new software as the applications themselves require the newer versions of the OS. It has been this way on apple hardware for decades, and has laid the foundation of not ever needing to provide backwards compatibility for more than a few years as well as forcing new hardware purchases. The 'emulation story' only needs to work for a couple of generations, then it itself can be sunsetted and is not expected to be backwards compatible with newer OSes. It is also the reason it is pretty much impossible to upgrade CPUs in Apple machines.

      > IMO, this is better than carrying around 40 years of cruft.

      Backwards compatibility is such a strong point, it is why Windows survives even though it has become a bloated, ad-riddled mess. You can argue which is better, but that seriously depends on your requirements. If you have a business application coded 30 years ago on x86 that no developer in your company understands any more, then backwards compatibility is king. On the other end of the spectrum, if you are happy to purchase new software subscriptions constantly and having bleeding-edge hardware is a must for you, then backwards compatibility probably isn't required.

      • mxey a day ago

        > Apple purposely make it so after 3 new versions of the OS you cannot upgrade the OS on the hardware any further.

        A new major version of macOS comes out every year. The oldest Mac still supported by the upcoming macOS 26 is from 2019.

        • rerdavies 18 hours ago

          Depends on the OS version. I had a Mac Mini that got cut off from new macOS versions after three years.

        • alt227 a day ago

          Wow, 6 years!

      • commakozzi a day ago

        > Apple purposely make it so after 3 new versions of the OS you cannot upgrade the OS on the hardware any further

        "oh a post about Apple, let me come in and share my hatred for Apple again by outright lying!"

        As stated already, macOS 26 runs on the M1 and even the 2019 MacBook Pro. So I think I know where you got the "3 new versions" figure, and it's a dark and smelly place.

        • alt227 a day ago

          Apologies, I was under the impression that major OS releases were every 2 years, and so I equated 6 years to 3 releases. No need to be quite so rude when you could just factually correct me.

          However, my parents' 2017 MacBook Pro can only upgrade to Ventura, which is a 2022 release. 5 years and that $2.5k baby was obsolete. However rude you are in your defense of Apple, 5-6 years until software starts being unable to install is pretty shitty. I use 30-year-old apps daily on Windows with no issue.

          Looks like defending Apple is the smelly place to be judging by your tone and condescending snark.

          • commakozzi a day ago

            If you truly believed major releases were every 2 years, then I apologize, but I thought my "objectionable commentary" was fairly light on the snark. It's quite trendy to hate on Apple, so I assumed you were one of those. I don't honestly care, but what I do care about is when people lie about things to try to make a point. It's been happening more often lately, it seems, and I quickly respond when I think I see it.

          • filoeleven a day ago

            > My parents' 2017 MacBook Pro can only upgrade to Ventura, which is a 2022 release. 5 years and that $2.5k baby was obsolete.

            Meanwhile, in [Windows land]:

            > Microsoft has provided the minimum and feature-specific device specifications required for upgrading to Windows 11. A number of devices will meet these requirements, however devices with legacy BIOS or without a Trusted Platform Module (TPM 2.0) are not compatible for the upgrade.

            > Microsoft also provided a full list of supported Intel processors; however this loosely translates to compatibility with Intel's 8th-generation processors and newer, meaning devices produced within the last 6-7 years have a high chance of being compatible.

            Sure looks like Apple's support of old machines is in line with Windows here.

            [Windows land] https://www.rm.com/blog/2024/may/a-surprising-number-of-pcs-...

          • lokar a day ago

            It's a real trade-off.

            I don't really know why Windows is so very, very bad; I just assume it has a lot to do with all the compatibility. If that's the case, then I prefer the Apple approach.

      • hollerith a day ago

        >Apple purposely make it so after 3 new versions of the OS you cannot upgrade the OS on the hardware any further.

        This is false.

        • alt227 a day ago

          Apologies, I meant 5-6 years, with a release every 2. Turns out it's every year, so I was wrong.

          • JustExAWS a day ago

            Apple has released a new version every year for almost a decade now.

  • olejorgenb a day ago

    > Apple has had decades optimizing its software and hardware stacks to the demands of its majority users, whereas Intel and AMD have to optimize for a much broader scope of use cases.

    But as you mention, they've changed the underlying architecture multiple times, which surely would render a large part of prior optimizations obsolete?

    > Software in x86 world is not optimized, broadly, because it doesn’t have to be.

    Does ARM software need optimization more than x86 software does?

    • rerdavies 18 hours ago

      > Do ARM software need optimization more than x86?

      If Windows-World software developers could one day announce that they will only support Intel Gen 14 or later (and not bother with AMD at all), and only support the latest and greatest NVidia GPUs (and only GPUs that cost $900 or more), I'm pretty sure they would optimize their code differently, and would sometimes get dramatic performance improvements.

      It's not so much that ARM needs optimizations more, but that x86 software can't practically be broadly optimized.

  • brookst a day ago

    That sure sounds more like the reality of a performance gap than the appearance of one.

    • jayd16 a day ago

      The broader audience/apples to oranges bit is fair. We're not choosing apple hardware for server. x64 is still dominant on the server with some cheap custom arm chips as an option, no?

      • brookst a day ago

        Sure, but that's very different from the context of the original question.

  • novok a day ago

    I don't think backcompat is that big of a deal, since old DOS programs take hardly any compute power to run, and Apple has shown that layers like Rosetta work fine.

  • Eric_WVGG a day ago

    > Software in x86 world is not optimized, broadly, because it doesn’t have to be. The demoscene shows what can be achieved in tight performance envelopes, but software companies have never had reason to optimize code or performance when next year has always promised more cores or more GHz.

    This is why I get so livid regarding Electron apps on the Mac.

    I’m never surprised by developer-centric apps like Docker Desktop — those inclined to work on highly technical apps tend not to care much about UX — but to see billion-dollar teams like Slack and 1Password indulge in this slop is so disheartening.

  • jayd16 a day ago

    I generally agree but what's Qualcomm's excuse?

    • rerdavies 17 hours ago

      Samsung seems to have some ARM processors that compete favorably with M-class processors.

gettingoverit a day ago

> might be my Linux setup being inefficient

Given that videos spin up the fans, there is actually a problem with your GPU setup on Linux, and I expect there'd be an improvement if you managed to fix it.

Another thing is that Chrome on Linux tends to consume an exorbitant amount of power with all the background processes, inefficient rendering and disk IO, so updating it to one of the latest versions and enabling "memory saving" might help a lot.
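
A quick way to sanity-check it: confirm the driver exposes hardware decode, then confirm the browser actually uses it. A rough sketch, assuming VA-API with Mesa (vainfo is in libva-utils); Chrome's VA-API feature flags have changed names across versions, so treat the flags below as illustrative:

    # does the driver expose hardware decode profiles (VAEntrypointVLD)?
    vainfo | grep -iE 'driver|vld'

    # is the browser using it? chrome://gpu should report "Video Decode: Hardware accelerated".
    # feature flag names vary by Chrome version; one commonly suggested invocation:
    google-chrome --enable-features=VaapiVideoDecoder,VaapiVideoDecodeLinuxGL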

Switching to another scheduler, reducing the interrupt rate, etc. probably helps too.

Linux on my current laptop cut battery life by a factor of 12 compared to Windows, and a bunch of optimizations like that managed to improve the situation to something like 6x worse, i.e. it's still very bad.

> Is x86 just not able to keep up with the ARM architecture?

Yes and no. x86 is inherently inefficient, and most of the progress over the last two decades was about offloading computations to more advanced and efficient coprocessors. That's how we got GPUs, and DMA on M.2 and Ethernet controllers.

That said, it's unlikely that x86 specifically is what wastes your battery. I would rather blame Linux, and suspect its CPU frequency/power drivers are misbehaving on some CPUs, and unfortunately I have no idea how to fix it.

  • fnle a day ago

    > x86 is inherently inefficient

    Nothing in x86 forces an implementation to be less efficient than what you could do with ARM instead.

    x86 and ARM have historically served very different markets. I think the pattern of efficiency differences of past implementations is better explained by market forces rather than ISA specifics.

  • akho a day ago

    x12 and x6 do not seem plausible. Something is very wrong.

    • loudmax a day ago

      These figures are very plausible. Most Linux distros are terribly inefficient by default.

      Linux can actually meet or even exceed Windows's power efficiency, at least at some tasks, but it takes a lot of work to get there. I'd start with powertop and TLP.

      As usual, the Arch wiki is a good place to find more information: https://wiki.archlinux.org/title/Power_management
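
      A rough first pass with those two tools, just as a sketch (package names vary by distro, and the right settings depend heavily on the machine):

          sudo powertop               # the "Tunables" tab shows what's still in bad power states
          sudo powertop --auto-tune   # apply its suggestions for this boot (not persistent)

          sudo apt install tlp        # or the dnf/pacman equivalent
          sudo tlp start
          sudo tlp-stat -p            # processor: driver, governor, EPP, turbo status
          sudo tlp-stat -b            # battery status and charge thresholds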

      • akho a day ago

        Those numbers would imply <1h runtime, or a >50W consumption at idle (for typical battery capacities). That's insane.

        I've used Linux laptops since ~2007, and am well aware of the issues. 12x is well beyond normal.
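
        For anyone wondering where their own machine sits, the battery reports its discharge rate directly (a sketch; paths differ a bit per vendor, e.g. BAT1 instead of BAT0, or current_now/voltage_now instead of power_now):

            # instantaneous draw in microwatts while on battery
            cat /sys/class/power_supply/BAT0/power_now

            # or via upower, which reports it in watts as "energy-rate"
            upower -i $(upower -e | grep -m1 BAT) | grep -E 'state|energy-rate'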

        • E39M5S62 a day ago

          At least on Thinkpads over the years, I've never seen anything remotely close to that either. I've had my Thinkpad x260 power draw down to 2.5 watts at idle, and around 4 or 5 watts with a browser and a few terminals open. That was back in 2018! With the hot-swappable battery on the back, I could go for 24 hours of active use without concern.

          • akho a day ago

            I get below 5W at idle (ff and emacs open, screen at indoor brightness, wifi on) on my gen11 framework. Going from 8 to 5 required some tinkering.

            I don't think I ever saw 50W at all, even under load; they probably run an Ultra U1xxH that stays permanently turbo-boosted for some reason. Given the level of tinkering (with schedulers and interrupt frequencies), it's likely self-imposed at this point, but you never know.

    • gettingoverit a day ago

      My CPU is at over 5GHz, 1% load and 70C at the moment. That's in a "power-saving mode".

      If nothing were wrong, it'd be at something like 1.5GHz with most of the cores powered down.
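
      For what it's worth, this is roughly what I look at to see what's actually driving frequency selection (a sketch; the EPP file only exists with intel_pstate or amd-pstate in active mode):

          cd /sys/devices/system/cpu/cpu0/cpufreq
          cat scaling_driver                  # e.g. intel_pstate, amd-pstate-epp, acpi-cpufreq
          cat scaling_governor                # powersave / performance (or schedutil, ondemand)
          cat energy_performance_preference   # e.g. performance, balance_power, power
          grep . /sys/devices/system/cpu/cpufreq/policy*/scaling_cur_freq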

      • vient a day ago

        Something is wrong with the power governor then. I have the opposite experience: I was able to tune Linux on a Core Ultra 155H laptop so it lasts longer than the Windows install. I needed to use kernel 6.11+ and TLP [0] with pretty aggressive energy saving settings. Also played a bit with Intel LPMD [1] but did not notice much improvement.

        [0] https://github.com/linrunner/TLP

        [1] https://github.com/intel/intel-lpmd

        • Chiikawa 19 hours ago

          I also own a 155H laptop running Linux Mint! Would you share your settings for TLP and LPMD? I am not getting much longer battery life than Windows 11 on it, even after some tinkering, so seeing somebody else's setup may help a lot. Thanks!

          • vient 10 hours ago

            I won't say I got much longer battery life, and even what I got may well be explained as "TLP made energy profile management almost as good as on Windows, and then Windows's tendency to accumulate a bunch of junk processes sipping on your battery tipped the scales to favor Linux". Also, I ended up switching back to Windows because of never-ending hardware issues with Linux; installing it on the 155H back in February 2024 was especially rough, and even 6 months later I randomly got Bluetooth not working anymore after an Ubuntu update.

            My TLP and LPMD configs: https://gist.github.com/vient/f8448d56c1191bf6280122e7389fc1...

            TLP: I don't remember the details now; as I recall, the scaling governor does not do anything on modern CPUs when the energy perf policy is used. CPU_MAX_PERF_ON_BAT=30 seems to be crucial for battery savings, sacrificing performance (not too much for everyday use, really) for joules in the battery. CPU_HWP_DYN_BOOST_ON_BAT=0 further prohibits using turbo on battery, just in case.
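
            For reference, those two keys as they'd sit in a TLP drop-in (same values as above; they worked for this machine, not universal):

                # /etc/tlp.d/10-battery.conf
                CPU_MAX_PERF_ON_BAT=30        # cap max performance on battery
                CPU_HWP_DYN_BOOST_ON_BAT=0    # disable HWP dynamic boost on battery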

            LPMD: again, did not use it much in the end so not sure what even is written in this config. May need additional care to run alongside TLP.

            Also, I used these boot parameters. For performance, I think the beneficial ones are the mitigations, nohz_full, and rcu* ones:

                quiet splash sysrq_always_enabled=1 mitigations=off i915.mitigations=off transparent_hugepage=always iommu=pt intel_iommu=on nohz_full=all rcu_nocbs=all rcutree.enable_rcu_lazy=1 rcupdate.rcu_expedited=1 cryptomgr.notests no_timer_check noreplace-smp page_alloc.shuffle=1 tsc=reliable

      • akho a day ago

        What is the laptop, and what's it doing?

      • t-3 a day ago

        What p-state driver are you using?

DuckConference 2 days ago

They're big, expensive chips with a focus on power efficiency. AMD and Intel's chips that are on the big and expensive side tend toward being optimized for higher power ranges, so they don't compete well on efficiency, while their more power efficient chips tend toward being optimized for size/cost.

If you're willing to spend a bunch of die area (which directly translates into cost) you can get good numbers on the other two legs of the Power-Performance-Area triangle. The issue is that the market position of Apple's competitors is such that it doesn't make as much sense for them to make such big and expensive chips (particularly CPU cores) in a mobile-friendly power envelope.

  • aurareturn 2 days ago

    Per core, Apple’s Performance cores are no bigger than AMD’s Zen cores. So it’s a myth that they’re only fast and efficient because they are big.

    What makes Apple Silicon chips big is that they bolt a fast GPU onto them. If you include the die of a discrete GPU with an x86 chip, it'd be the same size as or bigger than an M-series chip.

    You can look at Intel’s Lunar Lake as an example where it’s physically bigger than an M4 but slower in CPU, GPU, NPU and has way worse efficiency.

    Another comparison is AMD Strix Halo. Despite being ~1.5x bigger than the M4 Pro, it has worse efficiency, ST performance, and GPU performance. It does have slightly more MT.

    • chasil 2 days ago

      Is it not true that the instruction decoder is always active on x86, and is quite complex?

      Such a decoder is vastly less sophisticated with AArch64.

      That is one obvious architectural drawback for power efficiency: a legacy instruction set with variable word length, two FPUs (x87 and SSE), 16-bit compatibility with segmented memory, and hundreds of otherwise unused opcodes.

      How much legacy must Apple implement? Non-kernel AArch32 and Thumb2?

      Edit: think about it... R4000 was the first 64-bit MIPS in 1991. AMD64 was introduced in 2000.

      AArch64 emerged in 2011, and in taking their time, the designers avoided the mistakes made by others.

      • daeken a day ago

        There's no AArch32 or Thumb support (A32/T32) on M-series chips. AArch64 (technically A64) is the only supported instruction set. Fun fact: this makes it impossible to run Mario Kart 8 via virtualization on Macs without software translation, since it's A32.

        How much that does for efficiency I can't say, but I imagine it helps, especially given just how damn easy it is to decode.

        • averne_ a day ago

          It actually doesn't make much difference: https://chipsandcheese.com/i/138977378/decoder-differences-a...

          • chasil a day ago

            I had not realized that Apple did not implement any of the 32-bit ARM environment, but that cuts the legs out of this argument in the article:

            "In Anandtech’s interview, Jim Keller noted that both x86 and ARM both added features over time as software demands evolved. Both got cleaned up a bit when they went 64-bit, but remain old instruction sets that have seen years of iteration."

            I still say that x86 must run two FPUs all the time, and that has to cost some power (AMD must run three - it also has 3dNow).

            Intel really couldn't resist adding instructions with each new chip (MMX, PAE for 32-bit, many more on this shorthand list that I don't know), which are now mostly baggage.

            • theevilsharpie a day ago

              > I still say that x86 must run two FPUs all the time, and that has to cost some power (AMD must run three - it also has 3dNow).

              Legacy floating-point and SIMD instructions exposed by the ISA (and extensions to it) don't have any bearing on how the hardware works internally.

              Additionally, AMD processors haven't supported 3DNow! in over a decade -- K10 was the last processor family to support it.

          • daeken a day ago

            Oh wow, I need to dig way deeper into this but wonderful resource - thanks!

    • Fluorescence a day ago

      > Despite being ~1.5x bigger than the M4 Pro

      Where are you getting M4 die sizes from?

      It would hardly be surprising, given the Max+ 395 has more, and on average better, cores, fabbed on 5nm unlike the M4's 3nm. Die size is mostly GPU though.

      Looking at some benchmarks:

      > slightly more MT.

      AMD's multicore passmark score is more than 40% higher.

      https://www.cpubenchmark.net/compare/6345vs6403/Apple-M4-Pro...

      > worse efficiency

      The AMD is an older fab process and does not have P/E cores. What are you measuring?

      > worse ST performance

      The P/E design choice gives different trade-offs e.g. AMD has much higher average single core perf.

      > worse GPU performance

      The AMD GPU:

      14.8 TFLOPS vs. M4 Pro 9.2 TFLOPS.

      19% higher 3D Mark

      34% higher GeekBench 6 OpenCL

      Although a much crappier Blender score. I wonder what that's about.

      https://nanoreview.net/en/gpu-compare/radeon-8060s-vs-apple-...

      • aurareturn a day ago

          Where are you getting M4 die sizes from?
        
        M1 Pro is ~250mm2. M4 Pro likely increased in size a bit. So I estimated 300mm2. There are no official measurements but should be directionally correct.

          AMD's multicore passmark score is more than 40% higher.
        
        It's an out of date benchmark that not even AMD endorses and the industry does not use. Meanwhile, AMD officially endorses Cinebench 2024 and Geekbench. Let's use those.

           The AMD is an older fab process and does not have P/E cores. What are you measuring?
        
        Efficiency. Fab process does not account for the 3.65x efficiency deficit. N4 to N3 is roughly ~20-25% more efficient at the same speed.

          The P/E design choice gives different trade-offs e.g. AMD has much higher average single core perf.
        
        Citation needed. Furthermore, macOS uses P cores for all the important tasks and E cores for background tasks. I fail to see why, even if AMD has higher average ST performance, that would translate to a better experience for users.

          14.8 TFLOPS vs. M4 Pro 9.2 TFLOPS.
        
        TFLOPs are not the same between architectures.

          19% higher 3D Mark
        
        Equal in 3DMark Wildlife, loses vs M4 Pro in Blender.

          34% higher GeekBench 6 OpenCL
        
        OpenCL has long been deprecated on macOS. 105727 is the score for Metal, which is supported by macOS. 15% faster for M4 Pro.

        The GPUs themselves are roughly equal. However, Strix Halo is still a bigger SoC.

        • vient a day ago

          > TFLOPs are not the same between architectures.

          Shouldn't they be the same if we are speaking about the same precision? For example, [0] shows the M4 Max at 17 TFLOPS FP32 vs the Max+ 395 at 29.7 TFLOPS FP32 - not sure what exact operation was measured, but at least it should be the same operation. Hard to make definitive statements without access to both machines.

          [0] https://www.cpu-monkey.com/en/compare_cpu-apple_m4_max_16_cp...

          • aurareturn a day ago

            Apple doesn't even disclose TFLOPS for the M4 Max, so no clue where that website got the numbers from.

            TFLOPS can't be measured the same between generations. For example, Nvidia often quotes sparsity TFLOPS which doubles the dense TFLOPS previously reported. I think AMD probably does the same for consumer GPUs.

            Another example is Radeon RX Vega 64 which had 12.7 TFLOPS FP32. Yet, Radeon RX 5700 XT with just 9.8 TFLOPS FP32 absolutely destroyed it in gaming.

        • Fluorescence a day ago

          What a waste of time.

          "directionally correct"... so you don't know and made up some numbers? Great.

          AMD doesn't "endorse benchmarks" especially not fucking Geekbench for multi-core. No-one could because it's famously nonsense for higher core counts. AMD's decade old beef with Sysmark was about pro-Intel bias.

          • aurareturn a day ago

              "directionally correct"... so you don't know and made up some numbers? Great.
            
            I never said it was exactly that size. Apple keeps the sizes of their base, Pro, and Max chips fairly consistent over generations.

            Welcome to the world of chip discussions. I've never taken apart an M4 Pro computer and measured the die myself. It appears no one on the internet has. However, we can infer a lot of it based on previously known facts. In this case, we know the M1 Pro's die size is around 250mm2.

              AMD doesn't "endorse benchmarks" especially not fucking Geekbench for multi-core. No-one could because it's famously nonsense for higher core counts. AMD's decade old beef with Sysmark was about pro-Intel bias.
            
            Geekbench is the main benchmark AMD tends to use: https://videocardz.com/newz/amd-ryzen-5-7600x-has-already-be...

            The reason is because Geekbench correlates highly with SPEC, which is the industry standard.

            • Fluorescence a day ago

              Their "main benchmark"? Stop making things up. It's no more than tragic fanboy addled fraud at this point.

              That three-year old press-release refers to SINGLE CORE Geekbench and not the defective multicore version that doesn't scale with core counts. Given AMD's main USP is core counts it would be an... unusual choice.

              AMD marketing uses every other product under the sun too (no doubt whatever gives the better looking numbers)... including Passmark e.g. it's on this Halo Strix page:

              https://www.amd.com/en/products/processors/ai-pc-portfolio-l...

              So I guess that means Passmark is "endorsed" by AMD too eh? Neat.

              • aurareturn a day ago

                The industry has moved past Passmark because it does not correlate to actual real world performance.

                The standard is SPEC, which correlates with Geekbench.

                https://medium.com/silicon-reimagined/performance-delivered-...

                Every time there is a discussion on Apple Silicon, some uninformed person always brings up Passmark, which is completely outdated.

                • Fluorescence a day ago

                  Enough. You don't know what you are talking about.

                  What's with posting 5 year old medium articles about a different version of Geekbench? Geekbench 5 had different multicore scaling so if you want to argue that version was so great then you are also arguing against Geekbench 6 because they don't even match.

                  https://www.servethehome.com/a-reminder-that-geekbench-6-is-...

                  "AMD Ryzen Threadripper 3995WX, a huge 64 core/ 128 thread part, was performing at only 3-4x the rate of an Intel D-1718T quad-core part, even despite the fact it had 16x the core count and lots of other features."

                  "With the transition from Geekbench 5 to Geekbench 6, the focus of the Primate Labs team shifted to smaller CPUs"

                  • aurareturn a day ago

                    GB6 measures MT the way most consumer applications use MT, whereas GB5 was embarrassingly parallel. GB6 reflects real-world usage better.

            • Hikikomori a day ago

              Your source is an article based on someone finding a Geekbench result for a just-released CPU, and you somehow try to say it's from AMD itself and an endorsed benchmark, huh.

              • aurareturn a day ago

                Those are AMD's marketing slides.

al_borland 2 days ago

I’ve been thinking a lot about getting something from Framework, as I like their ethos around repairability. However, I currently have an M1 Pro which works just fine, so I’ve been kicking the can down the road while worrying that it just won’t be up to par in terms of what I’m used to from Apple. Not just the processor, but everything. Even in the Intel Mac days, I ended up buying an Asus Zephyrus G14, which had nothing but glowing reviews from everyone. I hated it and sold it within 6 months. There is a level of polish that I haven’t seen on any x86 laptop, which makes it really hard for me to venture outside of Apple’s sandbox.

  • jillesvangurp 2 days ago

    I recently upgraded from an M1 mac book pro 15", which I was pretty happy with, to the M4 max pro 16". I've been extremely impressed with the new laptop. The key metric I use to judge performance is build speed for our main project. It's a thing I do a few dozen times per day. The M1 took about four minutes to run our integration tests. I should add that those tests run in parallel and make heavy use of docker. There are close to 300 integration tests and a few unit tests. Each of those hit the database, Redis, and Elasticsearch. The M4 Pro dropped that to 40 seconds. Each individual test might take a few seconds. It seems to be benefiting a lot from both the faster CPU with lots of cores and the increased amount of memory and memory bandwidth. Whatever it is, I'm seriously impressed with this machine. It costs a lot new but on a three year lease, it boils down to about 100 euros per month. Totally worth it for me. And I'm kind of kicking myself for not upgrading earlier.

    Before the M1, I was stuck using an intel core i5 running arch linux. My intel mac managed to die months before the M1 came out. Let's just say that the M1 really made me appreciate how stupidly slow that intel hardware is. I was losing lots of time doing builds. The laptop would be unusable during those builds.

    Life is too short for crappy hardware. From a software point of view, I could live with Linux but not with Windows. But the hardware is a show stopper currently. I need something that runs cool and yet does not compromise on performance. And all the rest (non-crappy trackpad, amazingly good screen, cool to the touch, good battery life, etc.). And manages to look good too. I'm not aware of any windows/linux laptop that does not heavily compromise on at least a few of those things. I'm pretty sure I can get a fast laptop. But it'd be hot and loud and have the unusable synaptics trackpad. And a mediocre screen. Etc. In short, I'd be missing my mac.

    Apple is showing some confidence by just designing a laptop that isn't even close to being cheap. This thing was well over 4K euros. Worth every penny. There aren't a lot of intel/amd laptops in that price class. Too much penny pinching happening in that world. People think nothing of buying a really expensive car to commute to work. But they'll cut on the thing that they use the whole day when they get there. That makes no sense whatsoever in my view.

    • al_borland a day ago

      The M4 was the first chip that tempted me to upgrade from the M1, which I think is the case for most people. At work, I’m at the mercy of the corporate lease. My personal Mac doesn’t get used in a way where I’ll see a major change, so I’m giving it a while longer.

      I’ve actually been debating moving from the Pro to the Air. The M4 is about on par with the M1 Pro for a lot of things. But it’s not that much smaller, so I’d be getting a lateral performance move and losing ports, so I’m going to wait and see what the future holds.

    • zarzavat 2 days ago

      Considering the amount of engineering that goes into Apple's laptops, and compared to other professional tools, 4000 EUR is extremely cheap. Other tradespeople have to spend 10x more.

    • yesnomaybe a day ago

      I'm in the same boat. Still running an MBP M1 Pro 14". Luckily I bought with 32GB in 2021 when it came out so it can run all things docker similar to your setup. I recently ran a production like workload, real stress test, it was the first time I had the fan spinning constantly but it was still responsive and a pleasure to use (and sit next to!) for a few hours.

      I've been window shopping for a couple of months now, have test run Linux and really liking the experience there (played on older Intel hardware). I am completely de-appled software-wise, with the 1 exception of iMessages because of my kids using ipads. But that's really about it. So, I'm ready to jump.

      But so far, all my research hasn't led to anything I'm convinced I wouldn't regret in the end. A desktop Ryzen 7700 or 9600X would probably suffice, but it would mean I'd need to constantly switch machines and I'm not sure if I'm ready for that. All mobile non-Macs have significant downsides and you typically can't even try before you buy. So you'd be relying on reviews. But everybody has a different tolerance for things like trackpad haptics, thermals, noise, screen quality etc. So those reviews don't give enough confidence. I've had 13 Apple years so far. First 5 were pleasant, the next 3 really sucked, but since Apple Silicon I feel I have totally forgotten all the suffering in the non-Apple world and with those noisy, slow Intel Macs.

      I think it has to boil down to serious reasons why the Apple hardware is not fit for one's purpose. Be it better gaming, an extreme amount of storage, an insane amount of RAM, all while ignoring the value of "the perfect package" and its low power draw, low noise etc. Something that does not make one regret the change. DHH has done it and so have others, but he switched to the Framework Desktop AI Max. So it came with a change in lifestyle. And he also does gaming, that's another good reason (to switch to Linux or dual boot, as he mentioned Fortnite).

      I don't have such reasons currently. Unless we see hardware that is at least as fast and enjoyable as the M1 Pro or higher. I tried Asahi but it's quite cumbersome with the dual boot, and DP Alt Mode isn't there yet and maybe never will be, so I gave up on that.

      So, I'll wait another year and will see then. I hope I don't get my company to buy me an M4 Max Ultra or so as that will ruin my desire to switch for 10 more years I guess.

  • koiueo 2 days ago

    > There is a level of polish

    Yeah, those glossy mirror-like displays in which you see yourself much better than the displayed content are polished really well

    • crinkly 2 days ago

      Having used both types extensively my dell matte display diffuses the reflections so badly that you can’t see a damn thing. The one that replaced it was even worse.

      I’ll take the apple display any day. It’s bright enough to blast through any reflections.

  • spankibalt 2 days ago

    > "There is a level of polish that I haven’t seen on any x86 laptop, which makes it really hard for me to venture outside of Apple’s sandbox."

    Hah, it's exactly the other way around for me; I can't stand Apple's hardware. But then again I never bought anything Asus... let alone gamer laptops.

    • nik736 2 days ago

      What exactly is wrong with Apple hardware?

      • happymellon 2 days ago

        For me, the keyboards in the UK have an awful layout.

        Not sure why they can follow ANSI in the US but not ISO here. I just have to override the layout and ignore the symbols.

      • spankibalt 2 days ago

        I very much prefer pen-enabled detachables, a much better form factor than the outdated classic laptop, with a focus on general-purpose computing, such as HP's ZBook x2 G4 detachable workstation. The ideal machine would be a second iteration of that design, just updated to be smaller as well as more performant and repairable. Of course that's not gonna happen, as there's, apart from legal issues, no money in it.

        Apple on the other hand doesn't offer such machines... actually never has. To me, prizing maintainability, expandability, modularity, etc., their laptops are completely undesirable even within the confines of their outdated form factor; their efficient performance is largely irrelevant, and their tablets are much too enshittified to warrant consideration. And that's before we get into the OS and eco-system aspects. :)

  • Zanfa 2 days ago

    Most manufacturers just don't give a shit. Had the exact same experience with a well-reviewed Acer laptop a while back, ended up getting rid of it a few months in because of constant annoyances, replaced with a MacBook Air that lasted for many years. A few years back, I got one of the popular Asus NUCs that came without networking drivers installed. I'm guessing those were on the CD that came with it, but not particularly helpful on a PC without a CD drive. The same SKU came with a variety of networking hardware from different manufacturers, without any indication of which combination I had, so trial and error it was. Zero chance non-techy people would get either working on their own.

    • ziml77 a day ago

      My venture outside of MacBooks included a Dell XPS. Supposed to be their high end, and that year's model was well reviewed by multiple sources.... yet I returned it after like a week. The fan would not only run far too often but the sound it made was also atrocious. I have no clue if mine was defective or if all the reviewers are deaf to high frequencies. And the body was so flimsy that I would grab the corner of the laptop to move it and end up triggering a mouse click.

  • Tade0 2 days ago

    I had a 2020 Zephyrus G14 - also bought it largely because of the reviews.

    First two years it was solid, but then weird stuff started happening like the integrated GPU running full throttle at all times and sleep mode meaning "high temperature and fans spinning to do exactly nothing" (that seems to be a Windows problem because my work machine does the same).

    Meanwhile the manufacturer, having released a new model, lost interest, so no firmware updates to address those issues.

    I currently have the Framework 16 and I'm happy with it, but I wouldn't recommend it by default.

    I for one bought it because I tend to damage stuff like screens and ports and it also enables me to have unusual arrangements like a left-handed numpad - not exactly mainstream requirements.

  • crinkly 2 days ago

    I suspect the majority of people who recommend particular x86 laptops have only had x86 laptops. There’s a lot of disparity in quality between brands and models.

    Apple is just off the side somewhere else.

PaulKeeble 2 days ago

I tend to think it's putting the memory on the package. Putting the memory on the package has given the M1 Max over 400GB/s, which is a good 4x what a usual dual-channel x64 CPU gets, and the latency is half that of going out to a DRAM slot. That is drastic, and I remember when the northbridge was first folded into the CPU by AMD with the Athlon 64 and it had similarly big improvements in performance. It also reduces power consumption a lot.
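
The rough arithmetic behind those numbers (bus widths and transfer rates are the usual spec-sheet figures, so treat this as an approximation):

    M1 Max:             512-bit LPDDR5 × 6400 MT/s ÷ 8 ≈ 410 GB/s
    dual-channel DDR5:  128-bit        × 6400 MT/s ÷ 8 ≈ 102 GB/s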

The cost is flexibility and I think for now they don't want to move to fixed RAM configurations. The X3D approach from AMD gets a good bunch of the benefits by just putting lots of cache on board.

Apple got a lot of performance out of not a lot of watts.

One other possibility on power saving is the way Apple ramps the clock speed. It's quite slow to increase from its 1GHz idle to 3.2GHz, about 100ms, and it doesn't even start for 40ms. With tiny little bursts of activity like web browsing and such, this slow transition likely saves a lot of power at a cost of absolute responsiveness.
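
If you want to eyeball the ramp behaviour on the x86/Linux side, the stock cpufreq sysfs files are enough (standard paths; the 0.1s interval is just a suggestion):

    # watch per-core frequency while you click around a web page
    watch -n 0.1 'cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq'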

  • Tuna-Fish 2 days ago

    > and the latency is half that of going out to a DRAM slot.

    No, it's not. DRAM latency on Apple Silicon is significantly higher than on the desktop, mainly because they use LPDDR which has higher latencies.

    • Remnant44 2 days ago

      I was going to mention this as well.

      Source: chipsandcheese.com memory latency graphs

  • RachelF 2 days ago

    A small reason for less power consumption with on-package RAM is that you don't need active termination, which does use a few watts of power. It isn't the main reason that the Macs use less power, though.

  • a3w 9 hours ago

    The manufacturing process they use for memory is not a good choice, actually. It is a tradeoff.

  • KingOfCoders a day ago

    Yes, this saves a lot of power and adds performance. But it destroys your upgrade ecosystem and annoys a vocal user base. Apple has no such ecosystem to protect and lots of fans, so they are playing their cards right.

  • userbinator 2 days ago

    > this slow transition likely saves a lot of power at a cost of absolute responsiveness.

    Not necessarily. Running longer at a slower speed may consume more energy overall, which is why "race to sleep" is a thing. Ideally the clock would be completely stopped most of the time. I suspect it's just because Apple are more familiar with their own SoC design and have optimised the frequency control to work with their software.
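
    Toy numbers, purely illustrative rather than measured, for why the slower option can lose once the rest of the package has to stay awake for the duration:

        slow clock: 0.5 W × 400 ms = 0.20 J, uncore powered the whole time
        fast clock: 2.0 W ×  80 ms = 0.16 J, then the whole SoC can power-gate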

  • aurareturn 2 days ago

    Memory bandwidth is not what makes the CPU fast and efficient. The CPU doesn’t even have access to the full Apple Silicon bandwidth.

    On package memory increases efficiency, not speed.

    However, most of the speed and efficiency advantages are in the design.

    • badc0ffee 4 hours ago

      > On package memory increases efficiency, not speed.

      All the same, the M series chips still have significantly better memory bandwidth. Where is that coming from?

daemonologist 2 days ago

I think this is partially down to Framework being a very small and new company that doesn't have the resources to make the best use of every last coulomb, rather than an inherent deficiency of x86. The larger companies like Asus and Lenovo are able to build more efficient laptops (at least under Windows), while Apple (having very few product SKUs and full vertical integration) can push things even further.

notebookcheck.com does pretty comprehensive battery and power efficiency testing - not of every single device, but they usually include a pretty good sample of the popular options.

  • nextos 2 days ago

    Framework is a bit behind the others in terms of cooling, apparently due to compromises needed to achieve modularity. However, a well-tuned Ryzen U in the latest ThinkPads is not that far from M chips in terms of computing power per Watt according to some benchmarks.

    Most Linux distributions are not well tuned, because this is too device-specific. Spending a few minutes writing custom udev rules, with the aid of powertop, can reduce heat and power usage dramatically. Another factor is Safari, which is significantly more efficient than Firefox and Chromium. To counter that, using a barebones setup with few running services can get you quite far. I can get more than 10 hours of battery from a recent ThinkPad.

    • quijoteuniv 2 days ago

      +1 on powertop, I have used it successfully for tuning old Macs that I have upcycled with Linux, and the difference is night and day.

      • danieldk 2 days ago

        powertop helps a lot; I went from 3-4 hours to 6-7 hours on a ThinkPad. That said, it's not something you would want to bother a regular user with. E.g. enabling powertop optimizations will enable USB autosuspend, which adds a delay every darn time you haven't touched your USB keyboard or mouse for a second. So you end up writing udev rules that exclude certain HID devices (or using different settings for when a laptop is on power or not), etc.

        These are the kinds of optimizations that macOS does out of the box and you cannot expect most Linux users to do (which is one of the reasons battery life is so bad on Linux out-of-the-box).

        • nextos a day ago

          I agree. The trick is to use powertop's suggestions to craft good udev rules, not to enable the powertop optimizations daemon directly. That doesn't work well in many scenarios. Someone should create a udev rule hardware database, or a udev rule generator for laptops and desktops to help common users.
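
          A minimal sketch of what such a rule file ends up looking like (the vendor/product IDs are placeholders, check lsusb; the filename is arbitrary), e.g. /etc/udev/rules.d/50-usb-power.rules:

              # enable USB autosuspend for everything by default
              ACTION=="add", SUBSYSTEM=="usb", TEST=="power/control", ATTR{power/control}="auto"
              # ...but keep a specific keyboard/mouse receiver always powered (IDs are placeholders)
              ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="046d", ATTR{idProduct}=="c52b", ATTR{power/control}="on"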

    • Klonoar a day ago

      > using a barebones setup with few running services

      The entire point here is that you can run whatever the hell you want on Apple’s stuff without breaking a sweat. I shouldn’t have to counter shit.

chvid 2 days ago

I don't think there is a single thing you can point to. But overall Apple's hardware/software is highly optimized, closely knit, and each component is in general the best the industry has to offer. It is sold cheap as they make money on volume and an optimized supply chain.

Framework does not have the volume, it is optimized for modularity, and the software is not as optimized for the hardware.

As a general purpose computer Apple is impossible to beat and it will take a paradigm shift for that to change (a completely new platform - similar to the introduction of the smartphone). Framework has its place as a specialized device for people who enjoy flexible hardware and custom operating systems.

  • john01dav 2 days ago

    > It is sold cheap as they make money on volume and an optimized supply chain.

    What about all the money that they make from abusive practices like refusing to integrate with competitors' products thus forcing you to buy their ecosystem, phoning home to run any app, high app store fees even on Mac OS, and their massive anti repair shenanigans?

    • chvid 2 days ago

      Macs today are not designed to be easily repairable but instead to be lighter and otherwise better integrated - I believe that is a consequence of consumer preferences and not shady business practices.

      As for the services - it is a bit off topic, as I believe Apple makes a profit on their Macs alone, ignoring their services business. But in general I have less of a problem with a subscription / fee-driven services business compared to an advertisement-based one. And as for the fee / alternative payment controversy (Epic vs Apple etc.), this is something that is relevant if you are a big brand that can actually market on your own / build an alternative shop infrastructure. For small-time developers, the marketing and payment infrastructure the Apple App Store offers is a bargain.

      • omnimus 2 days ago

        Macbooks are one of the heaviest laptops you can buy. I think they are doing it for the premium feel - it is extremely sturdy. I recently got some random Lenovo Yoga for Linux to go alongside my MacBook and it weighs less, is as thin, and even has a dedicated GPU - while having 2 user-replaceable M.2 slots. It is also very sturdy, but not as sturdy as MacBooks.

        What I am saying is that Apple could for sure fit replaceable drives without any hit to size or weight. But their Mac strategy is to price based on disk size and make repairs expensive so you buy a new machine. I don't complain; it is the reason why the cheapest MacBook Air is the best laptop deal.

        But let's stop this marketing story that it's their engineering genius not their market strategy.

        • weighterjsk a day ago

          >Macbooks are one of the heaviest laptops you can buy.

          I don't think this is even close to true. My last laptop from 2020 weighed ~2.6kg and its 2025 counterpart is still 2.1kg, while my work M1 Mac is 1.3kg.

          > I think they are doing it for the premium feel - it is extremely sturdy

          It's not merely a feel; I've thrown it onto the pavement more than once from ~1.5 meters and it's continued working well, whereas none of my previous laptops have gotten away scot-free from even one drop.

          Apple does make repairability very hard, though, which I agree should be made much more accessible.

        • aurareturn 2 days ago

            Macbooks are one of the heaviest laptops you can buy. I think they are doing it for the premium feel - it is extremely sturdy.
          
          Yes, because of the metal enclosure while nearly all Windows laptop makers use plastic. Macs are usually the thinnest laptops in their class though.

          • queenkjuul a day ago

            My Asus is all metal, thinner, and lighter than my same-screen-size MacBook

            It's also not as robust. But it's definitely thinner and lighter.

      • chvid 2 days ago

        I am pretty sure it is a consequence of consumer preference. I can see it from my own behaviour - I am a power user of all things computing and it has been decades since I upgraded a harddisk.

        • prewett a day ago

          Regarding storage, part of the advantage of their soldered storage is higher access speeds. Combined with the fact that, as you note, people rarely need to upgrade their storage, might as well use the faster storage. This is probably particularly a benefit for the 8 GB machines, which I assume use swap regularly.

    • JustExAWS a day ago

      Exactly what competitors’ products on the Mac don’t they integrate with? And no serious app distributes through the Mac App Store.

      Both the Mac and the iPhone support standard Bluetooth protocols and USB protocols.

  • larodi 2 days ago

    When one controls the OS and much of the delivery chain, it is not unthinkable to decide to throw some billions of $$$ at creating a chip optimized to serve exactly your needs.

    So this is precisely what Apple did, and we can argue it was long time in the making. The funny part is that nobody expected x86 to make way for ARM chips, but perhaps this was a corporate bias stemming from Intel marketing, which they are arguably very good at.

  • alt227 a day ago

    > As a general purpose computer Apple is impossible to beat

    Only if all you care about is having a laptop with really fast single-core performance. Anything that requires real grunt needs a workstation or server, which Apple silicon cannot provide.

dlcarrier 2 days ago

That's a Chrome problem, especially on extra powerful processors like Strix Halo. Apple is very strict about power consumption in the development of Safari, but Chrome is designed to make use of all unallocated resources. This works great on a desktop computer, making it faster than Safari, but the difference isn't that significant and it results in a lot of power draw on mobile platforms. Many simple web sites will peg a CPU core even when not in focus, and it really adds up with multiple tabs open.

It's made worse on the Strix Halo platform, because it's a performance first design, so there's more resource for Chrome to take advantage of.

The closest browser to Safari that works on Linux is Falkon. Its compatibility is even lower than Safari's, so there are a lot of sites where you can't use it, but on the ones where you can, your battery usage can be an order of magnitude less.

I recommend using Thorium instead of Chrome; it's better but it's still Chromium under the hood, so it doesn't save much power. I use it on pages that refuse to work on anything other than Chromium.

Chrome doesn't let you suspend tabs, and as far as I could find there aren't any plugins to do so; it just kills the process when there aren't enough resources and reloads the page when you return to it. Linux does have the ability to suspend processes, and you can save a lot of battery life, if you suspend Chrome when you aren't using it.

I don't know of any GUI for it, although most window managers make it easy to assign a keyboard shortcut to a command. Whenever you aren't using Chrome but don't want to deal with closing it and re-opening it, run the following command (and ignore the name, it doesn't kill the process):

    killall -STOP google-chrome

When you want to go back to using it, run:

    killall -CONT google-chrome

This works for any application, and the RAM usage will remain the same while suspended, but it won't draw power reading from or writing to RAM, and its CPU usage will drop to zero. The windows will remain open, and the window manager will handle them normally, but what's inside won't update, and clicks won't do anything until resumed.
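
If you want that on a hotkey, a small toggle script works (a sketch; the process name may be chrome, chromium or thorium depending on the package, so adjust NAME):

    #!/bin/sh
    # Toggle Chrome between stopped (SIGSTOP) and running (SIGCONT).
    NAME=google-chrome            # adjust for chromium/thorium builds
    if ps -o stat= -C "$NAME" | grep -q '^T'; then
        killall -CONT "$NAME"     # processes are stopped -> resume them
    else
        killall -STOP "$NAME"     # processes are running -> pause them
    fi
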
  • socalgal2 2 days ago

    AFAICT the comparisons to safari are no longer true

    https://birchtree.me/blog/everyone-says-chrome-devastates-ma...

    That might be different on other platforms

    • dlcarrier a day ago

      I can't vouch for Chrome and Safari themselves, but I can between Thorium and Falkon, because I regularly suspend Thorium and open the same page with Falkon, and watch the CPU usage graph drop from pegging a core to almost nothing.

    • NitpickLawyer 2 days ago

      I think the GP is talking about linux specifically. On a Mac I can see that Chrome disables unused tabs (mouse over says "Inactive tab, xxx MB freed up")

      • fullstop a day ago

        I have inactive tabs on Linux, and it shows the same thing.

  • pdimitar 8 hours ago

    > Chrome doesn't let you suspend tabs, and as far as I could find there aren't any plugins to do so

    Auto Tab Discard exists and works fine, but I am not sure it's what people call "suspending tabs". They need to reload when you click them and they objectively free the memory they used (I watch my memory usage closely).

etler 2 hours ago

Apple has vertical integration between their hardware and operating system meaning they have way more control. They can adapt their software to enable them to optimize their hardware in ways competitors can't.

bsenftner a day ago

Well, there is a major architectural reason why the entire M-series appears to be "so fast", and that is the unified memory, which eliminates the buffer-to-buffer data copying that is probably over half of what a chip with a non-unified memory architecture is doing at any given time. With unified memory you just reference the data where it is, and you're done.

  • rollcat a day ago

    I really like the principles behind AMD's chiplet design, of course they've had different design goals behind it (easier diversification of their product portfolio), but it remains a fact that you can slap a not-so-terrible GPU right next to a CPU core.

    There's probably a lot still missing: Apple integrated the memory on the same die, and built Metal for software to directly take advantage of that design. That's the competitive advantage of vertical integration.

    • kube-system a day ago

      > Apple integrated the memory on the same die

      It's on the same package but not the same die

  • hajile a day ago

    Apple made a big deal about this, but other iGPUs have done this for years.

    • bsenftner a day ago

      It's not just the GPU memory, it's also I/O memory. That speeds up a lot: just update the pointer to where the memory is, no copying out of I/O memory.

ac29 2 days ago

One downside of Framework is they use DDR instead of LPDDR. This means you can upgrade or replace the RAM, but it also means memory is much slower and more power hungry.

It's also probably worth putting the laptop in "efficiency" mode (15W sustained, 25W boost per Framework). The difference in performance should be fairly negligible compared to balanced mode for most tasks and it will use less energy.
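
On Linux the closest knob I know of is the platform power profile, e.g. via power-profiles-daemon where it's installed (whether that maps exactly onto Framework's 15W/25W "efficiency" numbers is an assumption on my part):

    powerprofilesctl list             # show what the firmware exposes
    powerprofilesctl set power-saver  # roughly the "efficiency" preset
    powerprofilesctl set balanced     # back to the default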

  • alt227 a day ago

    However, the latency of DDR is much better than LPDDR's, so there are pros and cons.

  • ptman 2 days ago

    Isn't Ryzen AI (Strix Point?) using similar non-upgradeable LPDDR?

    • Aissen 2 days ago

      Framework does not have any design with those LPDDR packages.

fidotron a day ago

There's a dimension to this people wilfully ignore: the AArch64 design is inspired, especially if you have a team as good as Apple have to execute an implementation of it. And that isn't a one way causality because AArch64 is what it is because of things that the Apple team wanted to do, which has led to their performance advantages today.

I don't think many people have appreciated just how big a change the 64 bit Arm was, to the point it's basically a completely different beast than what came before.

From the moment the iPhone went 64 bit it was clear this was the plan the whole time.

greatgib 16 hours ago

I think that you are wrong to extrapolate anything about the hardware capabilities based on your experience on a few tasks.

Differences are more software related in my opinion. And it might be just appearance, as Apple is used to doing tricks. For example, it was shown back in the day that people thought the iPhone was faster to load things because it used animations at load time, despite taking the same time as other phones.

For example, for the many tabs in Chrome, the difference might be that macOS is aggressively throttling things while your Linux laptop will give you the maximum performance possible, and so produce more heat. I often noticed that with macOS, especially when you don't have a lot of RAM. The OS will readily put other programs to sleep and evict them, but also other windows and other tabs I guess, as some of them are separate processes. Then, when you need them, it reloads the memory. Good in terms of power efficiency, but in my experience it led to terrible latencies, like when going from one window to another. Let's say around 1s. Not obvious if you are not used to better.

In the same way, a lot of people are used to Electron-based IDEs like VS Code and feel perfectly OK, but for me the latency between typing code and it showing on the screen is awful compared to my native IDE.

In the same way for macOS, you can see how often the laptop will go to sleep or dim the display unexpectedly with default settings. Like those people who suddenly drop out of Google Meet meetings because the Mac went to sleep despite the active call.

out_of_protocol 2 days ago

On the efficiency side, there's a big difference in the OS department. The recently released Lenovo Legion Go S handheld has both SteamOS (which is Arch, btw) and Windows 11 versions, allowing a direct comparison of the efficiency of AMD's Z1E chip under load with a limited TDP. And the difference is huge: with SteamOS, fps is significantly higher and at the same time the battery lasts a lot longer.

Windows does a lot of useless crap in the background that kills battery and slows down user-launched software

mmcnl 2 days ago

A lot of insightful comments already, but there are two other tricks I think Apple is using: (1) the laptops can get really hot before the fans turn on audibly and (2) the fans are engineered to be super quiet. So even if they run on low RPM, you won't hear them. This makes the M-series seem even more efficient than they are.

Also, especially the MacBook Pros have really large batteries, on average larger than the competition. This increases the battery runtime.

  • BlindEyeHalo 2 days ago

    The MacBook Air doesn't even have a fan. I don't think you could build a fanless x86 laptop.

    • mkl 2 days ago

      Sure you can. There are a bunch listed in this article: https://www.ultrabookreview.com/6520-fanless-ultrabooks/

      Fanless x86 desktops are a thing too, in the form of thin clients and small PCs intended for business use. I have a few HP T630s I use as servers (I have used them as desktop PCs too, but my tab-hoarding habit makes them throttle a bit too much for my use - they'd be fine for a lot of people).

      • int_19h a day ago

        My experience with fanless Intel is that they tend to be rather sluggish for desktop GUI use, though. Which doesn't seem to be an issue with Macbook Air.

      • mnw21cam 2 days ago

        Do you have a version of that web page for people who want to run Linux? That'd be particularly helpful.

        • jmkni a day ago

          I've been experimenting with Asahi Linux recently on a spare M2 Air I have lying around, honestly very impressed. It's come on a lot since I last tried it a year or so ago

        • zillow a day ago

          It's x86, they all run Linux. x86 (as in amd64) is standardized.

          • mnw21cam a day ago

            There certainly have been issues with drivers. It'd be nice to know in advance if that's the case with any particular system.

    • chrismorgan 2 days ago

      > I don't think you could build a fanless x86 laptop.

      Sure you can, they’re readily available on the market, though not especially common.

      But even performance laptops can often be run without spinning their fans up at all. Right now, the ambient temperature where I live is around 28°, and my four-year-old Ryzen 5800HS laptop hasn’t used its fan all day, though for a lot of that time it will have been helped by a ceiling fan. But even away from a fan for the last half hour, it sits in my lap only warm, not hot. It’s easy enough to give it a load it’ll need to spin the fan up for, but you can also limit it so it will never need its fan. (In summer when the ambient temperature is 10°C higher every day, you’ll want to use its fan even when idling, and it’ll be hard to convince it not to spin them up.)

      x86-64 devices that don’t even have fans won’t ever have such powerful CPUs, and historically have always been very underpowered. Like only 60% of my 5800HS’s single-threaded benchmarking and only 20% of its multithreaded. But at under 20% of the peak power consumption.
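
      For the capping, the usual knobs on Linux are the turbo/boost switches (paths depend on the cpufreq driver in use, so treat these as examples rather than universal):

          echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo   # Intel: disable turbo
          echo 0 | sudo tee /sys/devices/system/cpu/cpufreq/boost           # AMD acpi-cpufreq: disable boost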

    • mnw21cam 2 days ago

      Sure, I have one sitting on my desk right now. It uses an Intel Core m3, and it's 7.5 years old, so it can't exactly be described as high performance, but it has a fantastic 3200x1800 screen and 8GB of RAM, and since I do all my number-crunching on remote servers it has been absolutely perfect. Unfortunately, the 7.5-year-old battery no longer lasts the whole day (it'll do more like 2 hours, or 1 hour running Zoom/Teams). It has a nice rigid all-metal construction and no fan. I'm looking around for a replacement but not finding much that makes sense.

    • mmcnl a day ago

      It can consume almost 20W sustained, which is quite a lot. Competitors will definitely have fans roaring at this power draw. I think the all metal design makes a huge difference from a cooling perspective. The entire case is basically a heatsink.

    • mschuster91 2 days ago

      You can, the thing is you have to build it out of a solid piece of metal. Either that's patented by Apple or it is too expensive for x86 system builders.

      • qcnguy 2 days ago

        If I recall correctly Apple had to buy enormous numbers of CNC machines in order to build laptops that way. It was considered insane by the industry at the time.

        • sys_64738 2 days ago

          Now it makes complete sense. Sort of like how crowbarring a computer into a laptop form factor was considered insane back in the early 90s.

        • mschuster91 2 days ago

          Yup. The original article is gone, however there is the key excerpt in an old HN thread: https://news.ycombinator.com/item?id=24532257

          Apple, unlike a lot, if not all large companies (who are run by MBA beancounter morons), holds insanely large amounts of cash. That is how they can go and buy up entire markets of vendors - CNC mills, TSMC's entire production capacity for a year or two, specialized drills, god knows what else.

          They effectively price out all potential competitors at once for years at a time. Even if Microsoft or Samsung would want to compete with Apple and make their own full aluminium cases, LED microdots or whatever - they could not because Apple bought exclusivity rights to the machines necessary.

          Of course, there's nothing stopping Microsoft or Samsung from doing the same in theory... the problem these companies have is that building the war chest necessary would drag down their stonk price way too much.

          • lotsofpulp a day ago

            Some of the other big tech companies have, or are able to have, just as much cash as Apple, if not more:

            https://www.capitaladvisors.com/research/war-chest-exploring...

            They just don’t want to bet they can deploy it successfully in the hardware market to compete with Apple, so they focus on other things (cloud services, ads, media, etc).

            • mschuster91 a day ago

              Google is not a hardware company (outside of the Pixel lineup where they just take some white-label ODM design).

              Microsoft has a bit more hardware sales exposure from its consoles, but not for PCs. They don't have a need for revolutionary "it looks cool" stuff that Apple has.

              Amazon, same thing. They brand their own products as the cheap baseline, again no need.

              And Meta, all they do is VR stuff. And they did invest tons of money into that.

              • lotsofpulp a day ago

                The point is they have enough cash to make an attempt to be whatever company they want. Apple chose to delve into hardware, the others chose not to, not because they don’t have the cash.

ChuckMcM a day ago

There are really a lot of responses to this which explain it well. The summary though might be phrased as 'alignment'. Specifically, when everyone from the mainboard engineer to the product marketeer for the product have the same goals and priorities (are aligned) the overall system reflects that.

In x86 land the processor guys are always trying to 'capture more addressable market' which means features for specific things which perhaps have no value to your 'laptop' but are great for cars embedding the chip. Similarly for display manufacturers who want standards that work for everyone even if they aren't precisely what everyone wants. Need a special 'sleep the pixels that are turned off' mode for your screen ASIC which isn't part of the HDMI spec? Nah we're not gonna do that because who would use it? But Apple can, specific things in the screen that minimize power that the OS can talk to through 'side channels' that aren't part of any standard? Sure they can do that too. And if everyone is aligned on long battery life (for example) that happens.

I worked at both Google and Netapp and both of them bought enough hard drives that they could demand and get specific drive firmware that did things to make their systems run better. Their software knew about the specific firmware and exploited it. They 'aligned' their vendors with their system objectives which they could do because of their volume purchases.

In the x86 laptop space the 'big' vendors like Dell, HP, Asus, Lenovo, etc. can do that sort of thing. Framework doesn't have the leverage yet. Linux is an issue too because that community isn't aligned either.

Alignment is facilitated by mutual self interest, vendors align because they want your business, etc. The x86 laptop industry has a very wide set of customer requirements, which is also challenging (need lots of different kinds of laptops for different needs).

The experience is especially acute when one's requirements for a piece of equipment have strayed from the 'mass market' needs so the products offered are less and less aligned with your needs. I feel this acutely as laptops move from being a programming tool to being an applications product delivery tool.

blacksmith_tb 2 days ago

I considered getting a personal MBP (I have an M3 from work), but picked up a Framework 13 with the AMD Ryzen 7 7840U. I have Pop!_OS on it, and while it isn't quite as impressive as the MBP, it is radically better than other Windows / Linux laptops I have used lately. Battery life is quite good, ~5hr or so, not quite on par with the MBP but still good enough that I don't really have any complaints (and being able to upgrade RAM / SSD / even the mobo is worth some tradeoff to me, where my employers will just throw my MBP away in a few years).

  • spankibalt 2 days ago

    > "[...] battery life is quite good, ~5hr or so [...]"

    You call five hours good?! Damn... For productivity use, I'd never buy anything below shift-endurance (eight hours or more).

    • mrheosuper 2 days ago

      Depends on what you do at work, 5 hours of continuous editing video is pretty good.

    • weighterjsk a day ago

      Highly dependent on workload, using my older work laptop with 100Wh battery it lasted maybe ~40min if you put some real work on it. Browsing the web or managing tickets on Jira is completely different

  • justinram11 2 days ago

    Curious if the suspend / hibernate "just works" when you close the lid?

    I feel like I've tried several times to get this working in both Linux and Windows on various laptops and have never actually found a reliable solution (often resulting in having a hot and dead laptop in my backpack).

    • l72 2 days ago

      I have an Intel Framework running Fedora. I have found that Intel’s S0 sleep just uses way too much battery. I’d expect that in sleep mode it should last a week and still be above 50% power, but that is definitely not the case.

      I ended up moving to suspend-then-hibernate, where it suspends for an hour, allowing immediate wake-up, and then hibernates completely. It’s a decent compromise and I’ve never once had an issue with resuming from suspend or hibernate, nor have I ever had an issue with it randomly waking up and frying itself in a backpack or unexpectedly having a dead battery.

      My work M1 is still superior in this regard but it is an acceptable compromise.
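
      For reference, the systemd side of that setup looks roughly like the following (assuming a reasonably recent systemd and a working swap/hibernate configuration; the one-hour delay matches what I described above):

          # /etc/systemd/logind.conf -- lid close suspends first
          [Login]
          HandleLidSwitch=suspend-then-hibernate

          # /etc/systemd/sleep.conf -- fall through to hibernate after an hour asleep
          [Sleep]
          HibernateDelaySec=3600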

    • SequoiaHope 2 days ago

      I learned that even though I run Ubuntu, the Arch wiki has good info on the proper commands to run to configure this behavior on my machine.

    • blacksmith_tb 2 days ago

      It does! The only thing that wasn't working out of the box, so to speak, was the fingerprint reader; I had to do a little config to get it going.

    • queenkjuul a day ago

      If it makes you feel better, my work-provided MBP has picked up this habit and is dead by the time I go to wake it up

      Windows laptops are still worse, but i appreciate Apple continuing to give me reasons to hate them

  • bigmadshoe 2 days ago

    5 hours seems a lot worse than the ~10 hours I get on my M4 Air.

    • threatripper 2 days ago

      I get like 3 hours on my MBP when I use it. MacBooks have better runtime only when they are mostly idle, not when you fully load them.

      • baq 2 days ago

        Can confirm, when developing software (a big project at $JOB) getting 3h out of a M3 MBP is a good day. IDE, build, test and crowdstrike are all quite power hungry.

        • mbreese 2 days ago

          I wonder how much of that is crowdstrike. At $LASTJOB my Mac was constantly chugging due to some mandated security software. Battery life on that computer was always horrible compared to a personal MB w/o it.

          • yalok 2 days ago

            Exactly. Antiviruses are evil in this sense - crippling battery life significantly.

            Wherever possible, I send SIGSTOP (“pkill -STOP”) to all those processes, stalling them and thus saving battery…

            • mbreese a day ago

              The firewall on that computer killed the battery (with repeated crashing). It also refused to work with a USB Ethernet adapter so I could only use wifi. It was clearly a product meant to check a security box, written by a company that knew nothing about Macs, bought by Enterprise Windows admins. It was incredibly frustrating. (The next version of MacOS moved firewalls away from in-kernel to extensions. I like to think it was my repeated crash logs that made the difference.)

              I half wonder if that’s part of the issue with Windows PCs and their battery life. The OS requires so much extra monitoring just to protect itself that it ends up affecting performance and battery life significantly. It wouldn’t be surprising to me if this alone was the major performance boost Macs have over Windows laptops.

        • musicale 2 days ago

          > crowdstrike

          It is incredible that crowdstrike is still operating as a business.

          It is also hard to understand why companies continue to deploy shoddy, malware-like "security" software that decreases reliability while increasing the attack surface.

          Basically you need another laptop just to run the "security" software.

          • baq 2 days ago

            Allegedly, crowdstrike is S-tier EDR. Can’t blame security folks for wanting to have it. The performance and battery tax is very real though.

            • swiftcoder 2 days ago

              Ever since Crowdstrike fucked up and took out $10 billion worth of Windows PCs with a bad patch, most of the security folks I know have come around to the view that it is an overall liability. Something lighter-touch carries less risk, even if it isn't quite as effective.

          • zillow a day ago

            There are a few different reasons:

            - it's pushed by gov (it gives full access to machines, huge backdoor)

            - it's not actually the worst of its kind, sadly

            - their threat database is good (i.e. it will catch stuff)

            - it lets you look at everything on the machine (not the only one, but it's def. useful)

            - it's big - can't be faulted for "we had it and we got pwned" - yep, sad as well

            If operating systems weren't as poop as they are today, this would not be necessary - but here we are. And I bet major OS manufacturers will not really fix their OSes without ensuring it's just a fully walled garden (terrible for devs... but you'll probably just run a Linux VM for dev on top...). Bad intentions lead to bad software.

      • koiueo 2 days ago

        I concur.

        The only portable M device I heavily used on the go was my iPad Pro.

        That thing could survive for over a week if unused or only lightly used. But as soon as you open Lightroom to process photos, the battery would melt away in an hour or two.

    • Delk a day ago

      I get 8 to 10 hours of light use on my personal ThinkPad. Or ~6 h of Netflix at 50% screen brightness, despite the lack of hardware decoding for DRM encrypted video on Linux. All of these are with a max charge threshold of 80%. 5 hours of battery life sounds rather limited to me, too.
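
      For reference, on ThinkPads under Linux that 80% cap is just a sysfs write exposed by the thinkpad_acpi driver (the battery name below may differ on other machines):

          echo 80 | sudo tee /sys/class/power_supply/BAT0/charge_control_end_threshold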

      But then the numbers are hardly comparable without having comparable workloads. If I were regularly running builds or had some other moderate load throughout a working day, that'd probably cost a couple of hours.

    • bigstrat2003 2 days ago

      At a certain point it's not like it matters. If you're working for 5 hours, let alone 10, you will almost certainly be able to plug in during that time.

      • ManBeardPc 2 days ago

        It’s true for me. I need a portable workstation more than a mobile laptop; as long as it survives train travel (most trains have power outlets now), moving between buildings/rooms, or the occasional meeting with a customer plus presentation, it is enough for me.

        But I can imagine some people have different needs and may not have access to (enough) power outlets. Some meeting/conference rooms had only a handful of outlets for dozens of people. Definitely nice to survive light office work for a full working day.

  • himeexcelanta 2 days ago

    I’m sure it’s great.

    As a layman there’s no way I’m running something called “Pop!_OS” versus Mac OS.

    • DANmode 2 days ago

      How'd you get here - "as a layman"?

    • bigyabai 2 days ago

      You're missing out. I've daily-driven both; modern macOS feels like a Fisher-Price operating system by comparison.

    • blacksmith_tb 2 days ago

      Meh, it's kind of a silly name, sure, but it's one of the few distros backed by an actual vendor (System76) who isn't just trying to sucker you into buying something. As a result it has a nice level of polish and function.

      I like macOS fine, I have been using Macs since 1984 (though things like SIP grate).

      • cyberpunk a day ago

        Why does SIP grate? For my work machines I really want features like SIP to prevent fuckups and malware (especially considering how much code a random rust or node application pulls in).

        For tinkering machines and servers and stuff of course that’s a different story..

        • badc0ffee 3 hours ago

          You can always disable it on your tinkering machines.

noelwelsh 2 days ago

Like a few other comments have mentioned, AMD's Strix Halo / AI Max 380 and above is the chip family that is closest to what Apple has done with the M series. It has integrated memory and decent GPU. A few iterations of this should be comparable to the M series (and should make local LLMs very feasible, if that is your jam.)

  • aurareturn 2 days ago

    On Cinebench 2024 single-threaded, the M4 is roughly 4x more efficient and 50% faster than Strix Halo. These numbers can be verified by googling Notebookcheck.

    How many iterations to match Apple?

    • kangs 2 days ago

      Yes and no. I have a MacBook Pro M4 and a ZBook G1a (AI Max+ 395, i.e. Strix Halo).

      In day-to-day usage the Strix Halo is significantly faster, especially when large-context LLMs and games are involved - but also for typical stuff like Lightroom (GPU heavy) etc.

      On the flip side, the M4's battery life is significantly longer (but the MBP is also approx 1/4 heavier).

      For what it's worth I also have a T14 with a Snapdragon X Elite, and while its battery is closer to an MBP's, it's just kinda slow and clunky.

      So my best machine right now is the x86 one, actually!

      • aurareturn 2 days ago

          yes and no. i have macbook pro m4 and a zbook g1a (ai max 395+ ie strix halo)
        
        You're comparing the base M4 to a full fat Strix Halo that costs nearly $4,000. You can buy the base M4 chip in a Mac Mini for $500 on sale. A better comparison would be the M4 Max at that price.

        Here's a comparison I did between Strix Halo, M4 Pro, M4 Max: https://imgur.com/a/yvpEpKF

        As you can see, Strix Halo is behind M4 Pro in performance and severely behind in efficiency. In ST, M4 Pro is 3.6x more efficient and 50% faster. It's not even close to the M4 Max.

          (but also the mpb is approx 1/4 heavier)
        
        Because it uses a metal enclosure.

        • KingOfCoders a day ago

          Someone has these two machines, and claims the x86 feels faster in his work.

          You don't own any of the machines but have "made" a comparison by copying data from the internet I assume.

          This is like explaining to someone who eats a sweet apple that the internet says the apple isn't sweet.

          MacBook Pro, 2TB, 32gb, 3200 EUR

          HP G1a, 2TB, 128gb, 3700 EUR

          If we don't compare laptops but mini-PCs,

          Evo X2, 2TB, 128gb, 2000 EUR,

          Mac Mini, 2TB, 32gb, 2200 EUR

          • dagmx a day ago

            Their point is that they’re comparing between SoCs that aren’t in the same class, not that it’s not fast.

            They’re not arguing against their subjective experience using it, they’re arguing against the comparison point as an objective metric.

            If you’re picking analogies, it’s like saying Audis are faster than Mercedes but comparing an R8 against an A class.

            • KingOfCoders a day ago

              1. Everyone is different, I don't care if a computer is worse on paper if it's better in real

              2. I'd say apples and oranges is subjective and depends on what is important to you. If you're interested in Vitamin C, apples to oranges is a valid comparison. My interest in comparing this is for running local coding LLMs - and it is difficult to get great results on 24/32gb of Nvidia VRAM (but by far the fastest option/$ if your model fits into a 5090). For models to work with you often need 128gb of RAM, therefor I'd compare a Mac Studio 128gb (cheapest option from Apple for a 128gb RAM machine) with a 395+ (cheapest (only?) option for x86/Linux). So what is apples to oranges to you, makes sense to many other people.

              3. Why would you think a 395+ and an M4 Pro are in "a different class"?

              • dagmx a day ago

                Let me start with your last point because it’s where you’ve misread the original comment and why none of your following arguments seem to make sense to onlookers.

                They have a MacBook Pro with an M4, not an M4 Pro. That is a wildly different class of SoC from the 395. Unless the 395 is also capable of running in fanless devices too without issue.

                For your first point, yes it does matter if the discussion is about objectively trying to understand why things are faster or not. Subjective opinions are fine, but they belong elsewhere. My grandma finds her Intel celeron fast enough for her work, I’m not getting into an argument with her over whether an i9 is faster for the same reason.

                Your second point is equally as subjective, and out of place in a discussion about objectively trying to understand what makes the performance difference.

          • aurareturn a day ago

              You don't own any of the machines but have "made" a comparison by copying data from the internet I assume.
            
              This is like explaining to someone who eats a sweet apple that the internet says the apple isn't sweet.
            
            Yea, I never said he is wrong in his own experience. I was pointing out that the comparison is made between a base M4 and maxed out Ryzen. If we want to compare products in the same class, then use M4 Max.

              MacBook Pro, 2TB, 32gb, 3200 EUR
            
            A little disingenuous to max out on the SSD to make the Apple product look worse. SSD prices are bad value on Apple products. No one is denying that.

            • KingOfCoders a day ago

              I didn't "max out" the SSD, I chose an SSD to match the machine of the user.

              You: "You're comparing the base M4 to a full fat Strix Halo that costs nearly $4,000."

              Then

              You: "A little disingenuous to max out on the SSD to make the Apple product look worse."

              • aurareturn a day ago

                  I didn't "max out" the SSD, I chose an SSD to match the machine of the user.
                
                Why don't you try to match in CPU speed, GPU speed, NPU speed, noise, battery life, etc? Why match SSD only?

                That's why your post was disingenuous.

                If it helps you focus on what the actual discussion, we are comparing maximum CPU and GPU speeds for the dollar. That's it.

                • KingOfCoders a day ago

                  Evo X2, 128gb, 2000 EUR

                  Max Studio, 128gb, 4400 EUR

                  • aurareturn a day ago

                    Great. Here's what you're getting between an M4 Max vs an AMD AI 395+: https://imgur.com/a/yvpEpKF

                    And of course, the Mac Studio itself is a much more capable box with things like Thunderbolt5, more ports, quieter, etc.

                    I can see why some people would choose the AMD solution. It runs x86, works well with Linux, can play DirectX games natively, and is much cheaper.

                    Meanwhile, the M4 Max performs significantly better, is more efficient, is likely much quieter, runs macOS, and has more ports, better build quality, and Apple backing and support.

      • nicolaslem 2 days ago

        I also have a Strix Halo zbook G1A and I am quite disappointed in the idle power consumption as it hovers around 8W.

        Adding to that, it is very picky about which power brick it accepts (not every 140W PD-compliant adapter works) and the one that comes with the laptop is bulky and heavy. I am used to plugging my laptop into whatever USB-C PD adapter is around, down to 20W phone chargers. Having the zbook refuse to charge on them is a big downgrade for me.

        • yaro330 a day ago

          > Adding to that, it is very picky about which power brick it accepts (not every 140W PD-compliant adapter works) and the one that comes with the laptop is bulky and heavy. I am used to plugging my laptop into whatever USB-C PD adapter is around, down to 20W phone chargers. Having the zbook refuse to charge on them is a big downgrade for me.

          It's Dell, so they are probably not actually using PD3.1 to hit the 140W mark; instead they are probably using the PD3.0 extension and shoving 20V 7A into the laptop. I can't find any info, but you can check on the charger.

          If it lists 28V then it's 3.1, else 3.0. If it's 3.1, you can get a Baseus PowerMega 140W PD3.1; it seems like a really solid charger from my limited use.

          • nicolaslem a day ago

            It is HP, and the provided adapter's output is 28V 5A, so it's in spec.

            With some of the other 28V 5A adapters I have, it charges until a compute-heavy task kicks in and then stops. I have seen reports online of people seeing this behavior with the official adapter. My theory is that the laptop itself does not tolerate any ripple at all.

            • yaro330 a day ago

              Ah my bad. Are you sure your cable can do 140w? That was the source of most of my pains trying to push 100w to my work laptop. Baseus and Anker have some good PD3.1 chipped cables that worked for me. What kind of charger are you using?

        • omnimus 2 days ago

          I am also searching for a good portable brick to replace the 140W one. I found the 100W Anker Prime works well. And surprisingly there is an almost identical 3-port Baseus 100W GaN charger at half the price. For some reason it is hard to come by (they have a few other 100W bricks that are not so portable); I think it might be discontinued.

    • noelwelsh 2 days ago

      > How many iterations to match Apple?

      Why are you asking me? I'm not in charge of AMD.

      Yes the Strix Halo is not as fast on the benchmarks as the M4 Max, its bandwidth is lower, and the max config has less memory. However, it is available in a lot of different configurations and some are much cheaper than comparable M4 systems (e.g. the maxed out Framework desktop is $2000.) It's a tradeoff, as everything in life is. No need to act like such an Apple fanboi.

      • aurareturn 2 days ago

          Why are you asking me? I'm not in charge of AMD.
        
        Because you claimed this so I thought you knew:

          A few iterations of this should be comparable to the M series

    • alt227 a day ago

      The important part of this is 'single threaded'. If you are actually using Cinebench to do real rendering you would always want multi-core performance, which pretty much makes Apple's single-core benchmark results pointless.

    • KingOfCoders a day ago

      On one of the few workloads where massive parallelism makes sense, why quote a single threaded number? I'm curious.

      • aurareturn a day ago

        To show in real numbers why people say a MacBook always feels miles ahead of AMD and Intel in actual real-world experience.

        The primary reason is the ST speed (snappy feeling) and the efficiency (no noise, cool, long battery life).

        It just so happens that Cinebench 2025 is the only power measurement metric I have available via Notebookcheck. If Notebookcheck did power measurements for GB6, I'd rather use that as it's a better CPU benchmark overall.

        Cinebench 2025 is a decent benchmark but not perfect. It does a good enough job of demonstrating why the experience of using Apple Silicon is so much better. If we truly want to measure the CPU architecture like a professional, we would use SPEC and measure power from the wall.

    • FirmwareBurner 2 days ago

      >How many iterations to match Apple?

      Until AMD can build a tailor-made OS for their chips and build their own laptops.

      • aurareturn 2 days ago

        Here's an M4 Max running macOS running Parallels running Windows compared to AMD's very best laptop chip:

        https://browser.geekbench.com/v6/cpu/compare/13494385?baseli...

        M4 Max is still faster. Note that the M4 Max is only given 14 out of 16 cores, likely reserving 2 of them for macOS.

        How do you explain this when Windows has zero Apple Silicon optimizations?

        • FirmwareBurner 2 days ago

          Maybe Geekbench is not a good benchmark?

          • aurareturn 2 days ago

            Maybe it is? Cinebench favors Apple even more.

            GB correlates highly with SPEC. AMD also uses GB in their official marketing slides.

          • TiredOfLife 2 days ago

            Geekbench is the closest thing to a good benchmark that's usable across generations and architectures.

  • whynotminot a day ago

    > A few iterations of this should be comparable to the M series

    This assumes Apple's M series performance is a static target. It is not. Apple is iterating too.

  • TiredOfLife 2 days ago

    > It has integrated memory

    And has had for many years. Even Apple had that with the Apple II

    • noelwelsh 2 days ago

      What point are you trying to make? Strix Halo was released this year. How is the architecture of the Apple II relevant?

KingOfCoders a day ago

1. Memory soldered to the CPU

2. Much more cache

3. No legacy code

4. High frequencies (to be 1st in game benchmarks; see what happens when you're a little behind, like the last Intel launch: the perception is that Intel has bad CPUs because they are some percentage points behind AMD in games. That's pressure Apple doesn't have, as comparisons are mostly Apple vs. Apple and Intel vs. AMD)

The engineers at AMD are as good as those at Apple, but the two markets demand different chips, and so they get different chips.

For some time now the market has been talking about energy efficiency, and we see:

1. AMD soldering memory close to the CPU

2. Intel and AMD adding more cache

3. Talks about removing legacy instructions and bit widths

4. Lower out of the box frequencies

Will take more market pressure and more time though.

simne 3 hours ago

Read the history of the Bugatti Veyron for the answer. In short, VW made an extraordinary machine, but it was so expensive that they feared selling it at its real cost.

So VW effectively partially subsidized the Veyron for their clients, selling it underpriced.

I think the same happened with the Apple M architecture: it is extraordinary and different from anything on the market, but Apple sells it underpriced, so to limit losses they decided to limit it to very few models.

How do such things happen? Well, hardware is hard: usually such a sophisticated SoC needs 7-8 iterations to reach production, and this could cost a million or even more. And the most common problem is simply low yield, meaning, for example, you make 100 cores on one die but only 5-6 work.

How do AMD/Intel deal with such things? It's hard, meaning complex.

First, they just have huge experience and a very wide portfolio of different SoCs, plus some tricks, so they could for example downgrade a Xeon to a Core i7 with jumpers.

Second, for large regular structures like RAM/cache, they could disable broken parts of the die with jumpers, or even disable whole cores. That's why there are so many DRAM PCB designs: they are usually made as 6 RAM fields with one controller, and with jumpers they could sell chips with literally 1, 2, 3, 4, 5 or 6 fields enabled. Some AMD SoCs exist with an odd number of cores because of this (for example 3 cores). Tricks like these let them average out profits across a wide line of SoCs.

Third, for some designs Intel/AMD reuse already proven technology: the Atom was basically just the first Pentium on a new semiconductor process, and for a long time the i7 series was basically the previous generation of Xeons.

Unfortunately for Apple, they don't have the luxury of such a wide product line, and don't have a significant place to dump low-grade chips, so they limited the M line to the one that, I think, just happened to have the best yield.

From my experience I could speculate that Apple's top management has considered making a wider product line once they achieve better yields, but for now without much success.

trashface 2 days ago

I may be out of date or wrong, but I recall that when the M1 came out there were some claims that x86 could never catch up, because of an instruction decoding bottleneck (instructions are all variable size) which the M1 does not have, or can handle in parallel. Because of that bottleneck, x86 needs to use other tricks to get speed, and those run hot.

  • Remnant44 2 days ago

    ARM instructions are fixed size, while x86 are variable. This makes a wide decoder fairly trivial for ARM, while it is complex and difficult for x86.

    However, this doesn't really hold up as the cause for the difference. The Zen4/5 chips, for example, source the vast majority of their instructions out of their uOp trace cache, where the instructions have already been decoded. This also saves power - even on ARM, decoders take power.

    People have been trying to figure out the "secret sauce" since the M chips have been introduced. In my opinion, it's a combination of:

    1) The apple engineers did a superb job creating a well balanced architecture

    2) Being close to their memory subsystem, with lots of bandwidth and deep buffers so they can use it, is great. For example, my old M2 Pro MacBook has more than twice the memory bandwidth of the current best desktop CPU, the Zen 5 9950X. That's absurd, but here we are...

    3) AMD and Intel heavily bias toward the costly side of the watts-vs-performance curve. Even the compact Zen cores are optimized more for area than wattage. I'm curious what a true low-power Zen core (akin to the Apple E cores) would do.

    • mycall 2 days ago

      When limited to 5 watts, the Ryzen HX 370 works pretty darn well. In some low-power use cases, my GPD Pocket 4 is more power efficient than my M3 MBA.

      • aurareturn 2 days ago

        We are going to need to see some numbers for your claim. That’s not believable.

        • ZiiS 2 days ago

          A 8.8" screen takes a lot less power.

          • aurareturn 2 days ago

            When you say efficiency, I assume you’re factoring in performance of the device as well?

            Maybe run Geekbench 6 and see.

            • ZiiS a day ago

              I am not the original commenter; but they said "low-power use cases", i.e. very much not when running Geekbench; rather when it is near idle.

              • aurareturn a day ago

                FYI, AMD chips are notoriously bad at idle.

      • happymellon 2 days ago

        We will need some citations on that as the GPD Pocket 4 isn't even the most power efficient pocket pc.

        Closest I've seen is an uncited Reddit thread talking about usb c charging draw when running a task, conflating it with power usage.

      • mmcnl 2 days ago

        How about single-core performance?

    • ozgrakkurt 2 days ago

      But is the uOp trace cache free? It surely doesn’t magically decode and put stuff in there without cost

      • Remnant44 2 days ago

        For sure. For what it's worth though, I have run across several references to ARM also implementing uop caches as a power optimization versus just running the decoders, so I'm inclined to say that whatever its cost, it pays for itself. I am not a chip designer though!

        • hajile a day ago

          Apple never used a uop cache in their designs. ARM dropped uop caches when they removed 32-bit support. Qualcomm also skipped uop cache.

          A uop cache made sense with 32-bit support because the 32-bit ISA was so complex (though still simple compared to x86). Once they went to a simplified instruction design, the cost of decoding every single time was lower than the cost of maintaining the uop cache.

    • saati 2 days ago

      Zens don't have a trace cache, just an uop cache.

  • astrange 2 days ago

    They can always catch up, it may just take a while. x86's variable size instructions have performance advantages because they fit in cache better.

    ARM has better /security/ though: not only does it have more modern features, but x86's variable-length instructions also mean you can reinterpret them by jumping into the middle of one.

  • scarface_74 2 days ago

    No one ever said that. The M1 was not the fastest laptop when it was introduced. It was a nice balance of speed/battery life/heat

connorbrinton a day ago

On my Framework (16), I've found that switching to GNOME's "Power Saver" mode strikes the right balance between thermals, battery usage and performance. I would recommend trying it. If you're not using GNOME, manually modifying `amd_pstate` and `amd_pstate_epp` (either via kernel boot parameters or runtime sysfs parameters) might help out.
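
For the sysfs route, a minimal sketch of the idea in C (the policy paths are the EPP interface exposed by amd_pstate; treat the "power" preference string as an assumption and check energy_performance_available_preferences on your kernel first):

  #include <glob.h>
  #include <stddef.h>
  #include <stdio.h>

  /* Write an energy_performance_preference hint to every cpufreq policy.
     Run as root; typically accepted values include "power", "balance_power",
     "balance_performance" and "performance". */
  int main(void) {
      glob_t g;
      const char *pattern =
          "/sys/devices/system/cpu/cpufreq/policy*/energy_performance_preference";
      if (glob(pattern, 0, NULL, &g) != 0)
          return 1;
      for (size_t i = 0; i < g.gl_pathc; i++) {
          FILE *f = fopen(g.gl_pathv[i], "w");
          if (!f) { perror(g.gl_pathv[i]); continue; }
          fputs("power\n", f);
          fclose(f);
      }
      globfree(&g);
      return 0;
  }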

I agree that it's unfortunate that the power usage isn't better tuned out of the box. An especially annoying aspect of GNOME's "Power Saver" mode is that it disables automatic software updates, so you can't have both automatic updates and efficient power usage at the same time (AFAIK)

hnaccountme 2 days ago

Apple tailors their software to run optimally on their hardware. Other OSes have to work on a variety of platforms, which limits the amount of hardware-specific optimization they can do.

  • pjerem 2 days ago

    Well I don’t think so.

    First, OP is talking about Chrome, which is not Apple software. And I can testify that I have observed the same behavior with other software that is really not optimized for macOS, or not optimized at all. JetBrains IDEs are fast on M*.

    Also, processor manufacturers are contributors to the Linux kernel and have an economic interest in having Linux run as fast as it can on their platforms if they want to sell them to datacenters.

    I think it's something else. Probably the unified memory?

    • yalok 2 days ago

      Chrome uses tons of APIs from MacOS, and all that code is very well optimized by Apple.

      I remember disassembling Apple’s memcpy function on ARM64 and being amazed at how much customization they did just for that little function to be as efficient as possible for each length of a (small) memory buffer.
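
      To give a flavor of the trick being described (a toy sketch, not Apple's actual code), here is the kind of per-length specialization a small-copy path can do, where each size class turns into a pair of overlapping fixed-width loads and stores that the compiler lowers to single instructions:

        #include <stddef.h>
        #include <stdint.h>
        #include <string.h>

        /* Toy copy routine for n <= 16 bytes. Real libraries chain many more
           of these cases and select per-CPU variants at load time. */
        void copy_upto16(void *dst, const void *src, size_t n) {
            unsigned char *d = dst;
            const unsigned char *s = src;
            if (n >= 8) {                    /* covers 8..16 via overlap */
                uint64_t head, tail;
                memcpy(&head, s, 8);
                memcpy(&tail, s + n - 8, 8); /* overlapping tail load */
                memcpy(d, &head, 8);
                memcpy(d + n - 8, &tail, 8);
            } else if (n >= 4) {             /* covers 4..7 */
                uint32_t head, tail;
                memcpy(&head, s, 4);
                memcpy(&tail, s + n - 4, 4);
                memcpy(d, &head, 4);
                memcpy(d + n - 4, &tail, 4);
            } else {                         /* 0..3 bytes */
                while (n--) *d++ = *s++;
            }
        }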

      • pm215 2 days ago

        memcpy (and the other string routines) are some of the library functions that most benefit from heavy optimisation and tuning for specific CPUs -- they get hit a lot, and careful adjustment of the code can get major performance wins by ensuring that the full memory bandwidth of the CPU is being used (which may involve using specific load instructions, deciding whether using the simd registers is better or not, and so on). So everybody who cares about performance optimises these routines pretty carefully, regardless of toolchain/OS. For instance the glibc versions are here:

        https://github.com/bminor/glibc/tree/master/sysdeps/aarch64/...

        and there are five versions specialised for either specific CPU models or for available architecture features.

  • dagmx 2 days ago

    This argument never passes the sniff test.

    You can run Linux on a MacBook Pro and get similar power efficiency.

    Or run third party apps on macOS and similarly get good efficiency.

    • kangs 2 days ago

      Unfortunately, contrary to popular belief, you cannot run Linux natively on recent MacBooks (M4) today.

      • dagmx a day ago

        That doesn’t really affect what I’m saying though. Yes, support capped out with the M2, but you can still observe the properties of efficiency on there.

      • sys_64738 2 days ago

        Depends what "natively" means. You can virtualize Linux through several means such as Virtual Box.

        • svantana a day ago

          ...but you won't get similar power efficiency, which was claimed.

    • danieldk 2 days ago

      > You can run Linux on a MacBook Pro and get similar power efficiency.

      What? No. Asahi is spectacular for what it accomplished, but battery life is still far worse than macOS.

      I am not saying that it is only software. It's everything from hardware to a gazillion optimizations in macOS.

      • dagmx 2 days ago

        It’s worse at switching power states, but at a given power state it is within the ball park of macOS power use.

        The things where it lags are anything that uses hardware acceleration or proper lowering to the lower power states.

  • aurareturn 2 days ago

    The fastest and most efficient Windows laptop in the world is an M4 MacBook running Parallels.

  • mmcnl 2 days ago

    No, it's not, it's absolutely the hardware. The vertical integration surely doesn't hurt, but third-party software runs very fast and efficient on M-series too, including Asahi Linux.

    • dancek 2 days ago

      Does Asahi Linux now run efficiently? I tried it on M1 about two years ago. Battery life was maybe 30% of what you get on macOS.

hedora a day ago

It sounds like something is horribly misconfigured.

- Try running powertop to see if it says what the issue is.

- Switch to firefox to rule out chrome misconfigurations.

- If this is wayland, try x11

I have an AMD SoC desktop and it doesn't spin up the fans or get warm unless it's running a recent AAA title or an LLM. (I'm running Devuan because most other distros I've tried aren't stable enough these days.)

In scatterplots of performance vs wattage, AMD and Apple silicon are on the same curve. Apple owns the low end and AMD owns the high end. There’s plenty of overlap in the middle.

lvl155 a day ago

AMD needs to put out a reference motherboard to pair with their chips. They're basically relying on third-party "manufacturers" to put up the R&D. We have decades of these mobo manufacturers doing the bare minimum, churning out crappy-quality mobos. No one's interested in overclocking in 2025. Why am I paying a $300 premium for a feature I don't care about?

jryan49 a day ago

What is the power profile setting? Is it on balanced or performance? Install powertop and see what is up. What distro are you using? The Linux drivers for the new AMD chips might stink because the chips are so new. Linux drivers for laptops stink in general compared to Windows. I know my 11th-gen WiFi still doesn't work right, even with the latest kernel and with power saving disabled on the WiFi.

jerome-jh 16 hours ago

This is a modularity vs integration tradeoff.

- Linux is a patchwork of software, written in a variety of programming languages, using a variety of libraries, some of which have the same functionality. There is duplication, misalignment, legacy.

- MacOS is developed by a single company. It is much more integrated and coherent.

Same for the CPU:

- x86 accesses memory through an external bus. The ability to install a third-party GPU requires an external bus, with a standardized protocol, bus width, etc. This is bound to lag behind the state of the art.

- Apple chips have on-package memory and GPU (same package, though not the same die). Higher speeds, optimization, escaping standardized protocols: all this is possible.

This has an impact on kernel/drivers/compilers:

- x86: so many platforms, CPU versions, and protocol revisions to support, often with limited documentation. This wastes a hell of a lot of engineering time!

- Apple: limited number of HW platforms to support, full access to internals.

phendrenad2 2 days ago

No incentive. x86 users come to the table with a heatsink in one hand and a fan in the other, ready to consume some watts.

kldg a day ago

For what it's worth (I'm not familiar with the Framework 13), I did recently review a marketed-for-AI-workloads laptop with a Ryzen 260 CPU and an Nvidia 5060 laptop GPU, which shipped with Windows, and I was curious how graphical Ubuntu with GNOME would run from a fresh install on it. It ran hot on simple tasks, with severely worse battery life (from 11h of playing a local video stream via Firefox down to 3.5h) and moderately worse total work output relative to Windows.

It runs Debian headless now (I didn't have particular use for a laptop in the first place). Not sure just how unpopular this suggestion'd be, but I'd try booting Windows on the laptop to get an idea of how it's supposed to run.

GeekyBear a day ago

> When can we expect a x86 laptop chip to match the M1 in efficiency/thermals?!

One of the things Apple has done is to create a wider core that completes more instructions per clock cycle for performance while running those cores at conservative clock speeds for power efficiency.

Intel and AMD have been getting more performance by jacking up the clock speeds as high as possible. Doing so always comes at the cost of power draw and heat.

Intel's Lunar Lake has a reputation for much improved battery life, but also reduces the base clock speed to around 2 gigahertz.

The performance isn't great vs the massively overclocked versions, but at least you get decent battery life.

st3fan 10 hours ago

I love all the benchmarks and numbers and whatnot, but for those of us who switched to M* laptops it is all very obvious without any of that: longer battery life, more performance for disk/CPU/GPU, no fans spinning.

My M1 Air still beats top-of-the-line i7 MacBook Pros.

thisiswhy1 a day ago

A friend in grad school was asking me for advice: he had an "in" at Intel, where another friend had promised him a basically no-show position making $250k and doing nothing. He was debating between this $250k/yr no-show position at Intel (no growth) and something elsewhere that was more demanding but would provide more growth.

This isn't the only no-show position I've heard about at Intel. That is why Intel cannot catch up. You probably cannot get away with that at Apple.

  • jmkni a day ago

    How could Intel possibly benefit from that?

    • woah a day ago

      Could be some manager who needs to show a certain headcount to maintain their status, but involving more people in the actual work would just be too many cooks in the kitchen. The friend probably had a degree that looked sufficiently good on paper to make the manager look good.

  • mixmastamyk a day ago

    I’ll take it, hard to find any work now… ;-)

ljosifov a day ago

Plenty of excellent comments about the companies - e.g. Apple vertical, closed, mobile-first, while Microsoft is horizontal, open, desktop-first; decades of work by many thousands of people went into optimising many tiny advantages, aka tricks - but I can't help thinking back to the pre-history. Intel was always more-is-more, while ARM was always less-is-more. Intel was winning for the longest time. I never expected to see non-x86 competitive single-core integer performance, tbh. And in the pre-pre-history, one generation further back, the tiny 6502 at 1MHz, mostly 8-bit only, could about keep up with the Z80 at 4MHz and its almost-aspiring-to-16-bit registers. Always made me wonder somewhat: "whut, how come??"

bjourne 21 hours ago

I have an ASUS Zenbook 14 OLED UX3405 and it's the same here - thermals are shit. I can't watch a YouTube video without the fan spinning up. I can in mplayer though, so there must be something in the desktop tech stack that prevents the cores from sleeping. Maybe it's Wayland, which I've noticed is sluggish sometimes, or maybe Linux TCP/IP handling of video streams isn't optimized for energy efficiency. The stack is so deep that finding the culprit is probably impossible.

76SlashDolphin a day ago

There's a lot of trash talking of x86 here but I feel like it's not x86 or Intel/AMD that are the problem for the simple reason that Chromebooks exist. If you've ever used a Chromebook with the Linux VM turned on, they can basically run everything you can run in Linux, don't get hot unless you actually run something demanding, have very good idle power usage, and actually sleep properly. All this while running on the same i5 that would overheat and fail to sleep in Windows / default Linux distros. This means that it is very much possible to have an x86 get similar runtimes and heat output as an M Series Mac, you just need two things:

- A properly written firmware. All Chromebooks are required to use Coreboot and have very strict requirements on the quality of the implementation set by Google. Windows laptops don't have that and very often have very annoying firmware problems, even in the best cases like Thinkpads and Frameworks. Even on samples from those good brands, just the s0ix self-tester has personally given me glaring failures in basic firmware capabilities.

- A properly tuned kernel and OS. ChromeOS is Gentoo under the hood and every core service is, AFAIK, recompiled for the CPU architecture with as many optimisations as possible enabled. I'm pretty sure that the kernel is also tweaked for battery life and desktop usage. Default installations of popular distros will struggle to support this because they come pre-compiled and they need to support devices other than ultrabooks.

Unfortunately, it seems like Google is abandoning the project altogether, seeing as they're dropping Steam support and merging ChromeOS into Android. I wish they'd instead make another Pixelbook, work with Adobe and other professional software companies to make their software compatible with Proton + Wine, and we'd have a real competitor to the M1 Macbook Air, which nothing outside of Apple can match still.

GuB-42 a day ago

Plenty of reasons, but the big one would be integration, especially RAM. Apple M series processors are exclusively designed for Apple products running the Apple OS, none of them extensible. It means it can be optimized for that use case.

RAM in particular can be a big performance bottleneck. Apple M has way better bandwidth than most x86 CPUs; having well-specified RAM chips soldered right next to the CPU instead of having to support DIMM modules certainly helps. AMD AI MAX chips, which also have great memory bandwidth and are the most comparable to Apple M, also use soldered RAM.

Maybe some details like ARM having a more efficient instruction decoder plays a part, but I don't believe it is that significant.

Aissen 2 days ago

You can probably install Asahi Linux on that M1 pro and do comparative benchmarks. Does it still feel different? (serious question)

tpoindex a day ago

A better question is which (if any) ARM competitors can achieve comparable performance to M-series? I do understand Apple has tuned the entire platform from cpu/gpu, cache, unified memory, and software to achieve what they offer.

  • jasona123 a day ago

    I think the challenge is going to be software, software tuning, and (until everyone builds for both ARM and x64) - translation/emulation. I’ll admit that I haven’t had much experience on the Windows side but I made the leap pretty quickly from the early 2015 MBP to an M1 MBA (like maybe a month after the M1 Macs came out) and it very much was seamless, whereas it still sounds like on the Windows on ARM side it’s been languishing even to this day.

purpleidea 2 days ago

Honestly, I have serious FOMO about this. I am never going to run a Mac (or worse: Windows); I'm 100% on Linux, but I seriously hate that I can't reliably work at a coffee shop for five hours. Not even doing that much other than some music, coding, and a few compiles of golang code.

My Apple friends get 12+ hrs of battery life. I really wish Lenovo+Fedora or whoever would get together and make that possible.

  • vid 10 hours ago

    Here's my variation; I can't stand the ergonomics of Apple computers. The screen doesn't tilt back far enough, they're heavy and slippery, and the only thing I could switch to from a Trackpoint is mind control.

    I'm a Linux guy too; when I have to use a Mac I turn all the gloss off and it's OK, but without going to Nix I miss a system-wide package manager, and I like an open-as-possible community OS that runs everywhere. It's a shame Apple doesn't license their chips.

    About a year ago I got a maxed out Macbook Pro, but the above combined with the fact I wasn't comfortable travelling with something that cost as much as a good used car made me return it.

    Now I'm using a Thinkpad that was ¼ the price and it's great, AMD chip, 64GB of RAM, replaceable storage, fantastic screen, keyboard (and Trackpoint) means it can do just about anything. Yes, battery life is limited, around four hours with the 16" OLED (I haven't put any work into optimizing it, and this isn't a battery-first model), but I can handle it. I'll maybe get a Strix Halo laptop since I like running LLMs, but otherwise x86 has improved enough that it's pretty good. That said, I won't complain if it matches/surpasses Apple chips, and I'd consider running a headless Apple 'server' at home.

  • mnw21cam 2 days ago

    I have a 7.5 year old Asus Zenbook UX305CA. It was the perfect laptop for my use case, given I run all heavy stuff on remote servers. 3200x1800 HiDPI screen, 8GB RAM, no fan, rigid aluminium construction (so it feels high quality), and it runs Linux pretty reliably. It used to get at least 6-7 hours of doing actual work, and one night I forgot to hibernate it or plug it in, and it was still running the next morning.

    Now, 7.5 years later, the battery is not so healthy any more, and I'm looking around for something similar, and finding nothing. I'm seriously considering just replacing the battery. I'll be stuck with only 8GB RAM and an ancient CPU, but it still looks like the best option.

    Another useful thing is that you can buy small portable battery packs that are meant for jump-starting car engines, and they have a 12V output (probably more like 14V), which could quite possibly be piped straight into the DC input of a laptop. My laptop asks for 19V, but it could probably cope with this.

  • prmoustache 2 days ago

    > work at a coffee shop

    That doesn't sound super secure to me.

    > for five hours.

    My experience with anything that is not designed to be an office is that it will be uncomfortable in the long run. I can't see myself working for 5 hours in that kind of place.

    Also it seems it is quite easily solved with an external battery pack. They may not last 12hours but they should last 4 to 6 hours without a charge in powersaving mode.

  • neobrain a day ago

    > I am never going to run a Mac (or worse: Windows) I'm 100% on Linux,

    I'm guessing you're well aware, but just in case you're not: Asahi Linux is working extremely well on M1/M2 devices and easily covers your "5 hours of work at a coffee shop" use case.

  • sys_64738 2 days ago

    > I'm 100% on Linux, but I seriously hate it that I can't reliably work at a coffee shop for five hours. Not even doing that much other than some music, coding, and a few compiles of golang code.

    Don't you drink any coffee in the coffee shop? I hope you do. But, still, being there for /five/ hours is excessive.

  • seabrookmx a day ago

    Despite OP's complaints (which are valid) I run Fedora on my Framework 13 (AMD) and I get 5 hours of work (10 ish Firefox tabs, multiple VS Code instances, terminals and Slack) without issue.

    It's not 8-12, and the fans do kick up. The track pad is fine but not as nice as the one on the MacBook. But I prefer to run Linux so the tradeoff is worth it to me.

  • zillow a day ago

    Try one of the newer AMD or Intel (TSMC-made) CPUs; it's pretty much the same. Keep in mind the battery size too: the MBP has a huge and very heavy battery (the MBP is super heavy).

    HP has Ubuntu-certified Strix Halo machines, for example.

  • wutwutwat 2 days ago

    > I seriously hate it that I can't reliably work at a coffee shop for five hours

    just... take your charger...

    • merelysounds 2 days ago

      They’re relatively heavy, take up space and there’s no guarantee there will be an outlet near your table. When connected, the laptop becomes more difficult to move or pack. It’s all doable but also slightly less convenient.

FrankyHollywood 2 days ago

Backward compatibility.

Intel provides processors for many vendors and many OSes. Changing to a new architecture is almost impossible to coordinate. Apple doesn't have this problem.

Actually, in the 90s Intel and Microsoft wanted to move to a RISC architecture, but Compaq forced them to stay on x86.

  • rollcat a day ago

    Apple: m68k -> PowerPC (32), OS 9 -> OS X, PowerPC (32, 64) -> x86 (32, 64) -> Arm. They've dragged giants like Adobe (kicking and screaming) through most stages.

    Windows NT has always been portable, but didn't provide any serious compat with Windows 4.x until 5.0. At that time, AMD released their 64-bit extension to x86. Intel wanted to build their own, Microsoft went "haha no". By that time they had been dictating the CPU architecture.

    I guess at that point there was very little reason to switch. Intel's Core happened; Apple even went to Intel to ask for a CPU for what would become the iPhone - but Intel wasn't interested.

    Perhaps I'm oversimplifying, but I think it's complacency. Apple remained agile.

bob1029 a day ago

In the general case, it appears to be impossible to beat a hardware vendor that is also entirely in charge of the operating system and much of the software on top of that (e.g. safari).

In special cases, such as not caring about battery life, x86 can run circles around M1. If you allow the CPU rated for 400W to actually consume that amount of power, it's going to annihilate the one that sips down 35W. For many workloads it is absolutely worth it to pay for these diminishing returns.

mips_avatar 16 hours ago

Part of it is software. macOS was redesigned for the M1 chips. They redid a huge amount of the OS, down to memory allocators and other low-level stuff.

noisy_boy a day ago

To those who are using the newer MacBook Pros: how easy and seamless is it to run Linux on them via Parallels etc. without going all the way to Asahi? Like, if I'm super comfortable with Linux, can I just get a near-native Linux desktop experience and forget that all of it is running on top of macOS?

  • yusefnapora a day ago

    Parallels is quite good - I can watch 4K YouTube videos at 60fps with no noticeable frame drops on an M1 Pro, and general desktop animations, etc. are fine. That said, I do occasionally get rendering glitches, usually in Firefox where a small rectangular portion of the screen will briefly flash black while scrolling quickly through a page.

    The biggest quality-of-life issue for me personally is the trackpad. Although support for gestures and so on has gotten quite decent in Linux land, Parallels only sends scroll wheel events to the VM, so there's no way to have smooth scrolling and swipe gestures inside the VM, and it feels much worse than native macOS or Asahi Linux running on the bare metal.

  • int_19h a day ago

    It's pretty seamless, but you can't really get the macOS UI out of the picture entirely. You can run it fullscreen, sure, but even then there are still some shortcuts that are going to be handled by macOS, and also multiple displays etc.

    OTOH if you're fine with macOS GUI but you want something like WSL for CLI and server apps, there's https://lima-vm.io

  • kjkjadksj a day ago

    Why bother with that? macOS is a Unix OS already.

rs186 2 days ago

> using the Framework feels like using an older Intel based Mac

Your memory serves you wrong. The experience with Intel-based Macs was much worse than with recent AMD chips.

  • rollcat a day ago

    Agree. My 2017 MBP cooked its own battery (spicy pillow) by 2021.

    My 2019 Thinkpad T495 (Ryzen 3600) does get hot under load, but it's still fine to type on.

    • alt227 a day ago

      Yep, but only because of Apple's terrible design. Take those same chips and put them in a machine with proper cooling and they fly. It's frustrating when Apple fans always blame that situation on Intel, when in reality Apple messed up the design badly. It's almost like they purposely designed the last generation of Intel Macs to run hot and throttle just so people had bad memories of them after upgrading to Apple silicon.

giancarlostoro a day ago

The M4 I have lets me run GPT-OSS-20B on my Mac, and it's surprisingly responsive. I was able to get LM Studio to even run a web API for it, which Zed detected. I'm pleasantly surprised by how powerful it is. My gaming machine with a 3080 cannot even run the same LLM model (not enough VRAM).

TomLisankie a day ago

I was in nearly the same situation as you and went with the Framework 13 as well (albeit with the AMD Ryzen 5 7640U which is an older chip). Not really regretting it though despite some quirks. Out of curiosity, how much RAM do you have in your Framework 13?

todotask2 2 days ago

x86 has long been the industry standard and can't be removed, but Apple could move away from it because they control both hardware and software.

aorth 2 days ago

Thanks for the honest review! I have two Intel ThinkPads (2018 and 2020) and I've been eying the Framework laptops for a few years as a potential replacement. It seems they do keep getting better, but I might just wait another year. When will x86 have the "alien technology from the future" moment that M1 users had years ago already?

danb1974 2 days ago

Macbooks are more like "phone/tablet hardware evolved into desktop" mindset (low power, high performance). x86 hardware is the other way around (high power, we'll see about performance).

That being said, my M2 beats the ... out of my twice-as-expensive work laptop when compiling an Arduino project. Literal jaw drop the first time I compiled on the M2.

dapperdrake 2 days ago

How much do you like the rest of the hardware? What price would seem OK for decent GUI software that runs for a long time on battery?

Am learning x86 in order to build nice software for the Framework 12 with the i3-1315U (Raptor Lake). Going into the optimization manuals for Intel's E-cores (apparently Atom) and AMD's 5c cores. The efficiency cores on the M1 MacBook Pro are awesome. Getting Debian or Ubuntu with KDE to run this on a FW12 will be mind-boggling.

dabockster 21 hours ago

The M series chips are optimized at both the assembly language and silicon (hardware) levels for mobile use. X86 is much more generalized.

munchlax a day ago

I can build myself a new amd64 box for just under €200. Under €100 with used parts. Some older Dell and Lenovo laptops even work with coreboot.

A MacBook Air sets me back €1000, enough to buy a used car, and AFAICT is much more difficult to get fully working Linux on than my €200 amd64 build.

Why hasn't Apple caught up?

  • atonse a day ago

    When netbooks ($400 notebooks) were all the rage, Steve Jobs was asked why Apple didn’t make one. And he said they didn’t know how to make a cheap laptop that didn’t suck.

    And he was right. Netbooks mostly sucked. Same with Chromebooks.

    There’s nothing to be gained by racing to the bottom.

    You can buy an m1 laptop for $599 at Walmart. That’s an amazing deal.

    • munchlax a day ago

          > You can buy ... for $599
      
      Not sure why you'd think any random nerd has that kind of money. And Walmart isn't exactly around the corner for most parts of the world.

      • atonse 10 hours ago

        I don't follow, is that a counter-argument to my statement? That there exist people that can't afford that, so that makes it not a good deal?

  • snovymgodym 19 hours ago

    > I can build myself a new amd64 box for just under €200.

    pcpartpicker link?

    • munchlax 2 hours ago

      Oof. From what I've glanced at, a CPU there costs nearly $70 and less performant models are actually costlier. Not sure what to do about that, so here's some general advice:

      An AMD APU paired with a microATX motherboard is the cheapest way to get reasonable computing performance. The best thing I ever did was to buy only a motherboard, CPU, and RAM. For €90, I was blown away.

      See if you can salvage any parts, there's no need for a case, SSD, and PSU every time. AMD has a nice upgrade path for this. A motherboard with AM2+ socket will fit an AM3 CPU. You can then later upgrade to an AM3+ motherboard and wait for a reasonably priced AM4 CPU, and so on. My gripe with this, however, is that it used to be possible to have different (scavenged) DDR RAM parts as long as it was the same number of pins. A quick glance at the notches would tell you if it fits and that was that. Nowadays you often have to have either just one bank or a few of exactly the same part.

      Sometimes it's possible to get lucky, like buying a cheaper model CPU and successfully unlocking an extra core. And a boxed CPU may be more expensive than a loose tray, but comes with an adequate cooler that'll fit just fine.

      Neither of us have any contact details in the profile, although I'm not sure if I can be of much help to you anyway.

      One more tip. If you see your box increasingly use swap during normal workload, adding RAM can significantly improve performance. Even if it's lower spec RAM.

  • FirmwareBurner a day ago

    >I can build myself a new amd64 box for just under €200

    Precisely because of that, they haven't caught up. They don't want to compete in the PC race to the bottom that nearly bankrupted them in the 90s, before they invented the iPod.

    Apple got rich by creating its own markets.

rugdobe a day ago

From a pure CPU and battery life perspective, the Snapdragon X Elite based Surface Laptop 7 is really quite good: comparable to M2 Pro and M3 Pro in performance and performance per watt. The GPU is a bit weak.

The build quality of the Surface Laptop is superb also.

anonzzzies a day ago

If only the Framework 12 could take the 395+, but I think it cannot work out vs ARM? And then my M4 Air is just better and cheaper. Cheaper I don't care about much, but battery vs. perf is quite mental.

Panzer04 2 days ago

Software.

If you actually benchmark said chips in a computational workload I'd imagine the newer chip should handily beat the old M1.

I find both windows and Linux have questionable power management by default.

On top of that, snappiness/responsiveness has very little to do with the processor and everything to do with the software sitting on top of it.

veidr a day ago

I don't give any fucks about battery life or even total power consumption cost; I just hate that I have some crap-ass Apple mid-range (for them) laptop with only 36GB RAM and an "M4 Max" CPU, and it runs rings around my 350W Core i9-14900K desktop Linux workstation, and there is essentially no way I can develop software (Rust, web apps, multi-container Docker crap) on Linux with anything close to the performance of my shitty laptop computer, even if I spend $10,000.

That's actually wild. I think we're in a kind of unique moment, but one that is good for Apple mainly, because their OS is so developer-hostile that I pay back all the performance gains with interest. T_T

  • kwimajs a day ago

    To be honest, I haven't done any research on this, but it's something that crosses my mind from time to time. My laptop has 32 GB of RAM and an i7-14700H processor, with Linux Mint installed. I'm more than happy with its performance, especially considering I bought it for a price that was very cheap for the market.

    I wonder what specs a MacBook would need to give me similar performance. For example, on Linux with 32 GB of RAM, I can sometimes have 4 or 5 instances of WebStorm open and forget about them running in the background. Could a MacBook with 16 GB of RAM handle that? Similarly, which MacBook processor would give me the real-world, daily-use performance I get from my 14700H? Should I continue using cheap and powerful Windows/Linux laptops in the future, or should I make the switch to a MacBook?

    (Translated from my native language to English using Gemini.)

    • veidr a day ago

      I don't know for sure, either, but I suspect any recent Macbook with 16GB RAM would be a significant upgrade over 14700H.

      I don't like macOS, so in recent years, I only use it on laptop (which for me is like, a few on-site meetings per year, plus a few airplane flights). What infuriates me is that my mid-tier Mac laptop for those use cases is now significantly faster than any Linux workstation I can possibly buy... and positively annihilates any non-Apple laptop machine on essentially every meaningful benchmark.

      • vlod a day ago

        I really hoped that Asahi Linux had progressed. I want to use Linux on apple hardware.

      • mixmastamyk a day ago

        There are a few new AMD rigs, for either gaming or unified-memory applications, that are competitive with or narrowly beat Apple in performance (not efficiency).

donatj a day ago

> I haven’t tried Windows on the Framework yet it might be my Linux setup being inefficient.

My experience has been to the contrary. Moving to Linux a couple months ago from Windows doubled my battery life and killed almost all the fan noise.

bigyabai 2 days ago

All Ryzen mobile chips (so far) use a homogeneous core layout. If heat/power consumption is your concern, AMD simply hasn't caught up to the big.LITTLE-style architecture Intel and Apple use.

In terms of performance though, those N4P Ryzen chips have knocked it out of the park for my use-cases. It's a great architecture for desktop/datacenter applications, still.

  • goosedragons 2 days ago

    Sort of. Technically the Ryzen 5 AI 340 has 3 Zen 5 cores and 3 Zen 5c cores. They are more similar than the power/efficiency cores of Apple/Intel but 5c cores are more power efficient.

pythonRon 2 days ago

Does the M series have a flat memory model? If so, I believe that may be the difference. I'm pretty sure the entire x86 family still pages RAM access which (at least) quadruples activity on the various busses and thus generates far more heat and uses more energy.

  • rwallace 2 days ago

    I'm not aware of any CPU invented since the late eighties that doesn't have paged virtual memory. Am I misunderstanding what you mean? Can you expand on where you are getting the 4x number from?

    • marshray 2 days ago

      I doubt any CPU has more levels of address translation, caching, and other layers of memory access indirection than AMD/Intel 64 at this point.

      • rwallace 2 days ago

        That's an interesting question about the number of levels of address translation. Does anyone have numbers for that, and how much latency and energy an extra layer costs?

Insanity a day ago

I have an iPad pro (m1) and don't feel like upgrading at all. Of course, it's an overpowered chip for a tablet - but I'm still impressed by what I can run on it (like DrawThings).

deezmofonutz a day ago

I love how few people mention ARM being used in the cloud when it has literally saved folks so much money not to mention the planet burns less quickly on ARM.

novok a day ago

I was about to say it might be Windows and suggest using Linux, since perf benchmarks on Windows can be far worse than on Linux for the same chip, but you are using Linux already.

hoppp a day ago

I don't think a fan spinning is negative. The cooling is functioning effectively.

Apple often lets the device throttle before it turns on the fans for "better UX"; Linux plays no such mind games.

  • xutopia a day ago

    The way the notebooks are built allows for passive cooling; the fans are actually quieter and the CPUs run cooler at the same workloads, as shown by Cinebench-per-watt tests. It's not just one simple thing.

FloatArtifact a day ago

It's not just the hardware that's efficient, it's also the software stack. I'd be curious to see macOS versus Linux battery life testing.

tannhaeuser 2 days ago

I always thought it was Apple's on-package DRAM latency that contributes to its speed relative to x86, especially for local LLM (generative, but not necessarily training) usage, but with the answers here I'm not so sure.

musicale 2 days ago

> a number of Dockers containers running simultaneously and I never hear the fans, battery life has taken a bit of a hit but it is still very respectable.

Note those docker containers are running in a linux VM!

Of course they are on Windows (WSL2) as well.

  • tannhaeuser 2 days ago

    Docker has got to be one of the worst energy consumption offenders, given that for developers it's most of the time running in a heavy VM under a non-Linux OS while people think it's lightweight. It doesn't help on the high-performance side either, esp. ML. Might just be the wrong abstraction, and driven by "cloud vendors" for conveniently (for them!) farming overcommitted servers with ill-partitioned, mostly-idle vibe "microservices."

eigenform a day ago

In general, probably co-design with software. Apple is in a position where they design microprocessors that are only going to be running MacOS/iOS.

amadeuspagel a day ago

Intel and AMD have to earn their investments back in one generation; Apple can earn their investments back over a customer's lifetime.

ahoka 14 hours ago

FW just probably has shitty thermals.

asno3030 2 days ago

They are pretty similar when comparing the latest AMD and Apple chips on the same node. Apple's buying power means that they get new nodes earlier than AMD, usually by 6-9 months.

Windows on the other hand is horribly optimized, not only for performance, but also for battery life. You see some better results from Linux, but again it takes a while for all of the optimizations to trickle down.

The tight optimization between the chip, the operating system, and targeted compilation all comes together to make a tightly integrated product. However, comparing raw compute and efficiency, the AMD products tend to match the capacity of any given node.

3836293648 9 hours ago

It's macOS. The M-series isn't that much better anymore. Just look at Asahi Linux: you get just as rubbish battery life there as with any Windows laptop.

+ What everyone else has already said about node size leads, specific benchmarks, etc

sandreas 2 days ago

In my opinion AMD is on a good path to having at least comparable performance to MacBooks by copying Apple's architectural decisions. Unfortunately their jump onto the latest AI hype train did not suit them well for efficiency: the Ryzen 7840U was significantly more efficient than the Ryzen AI 7 350 [1].

However, with AMD Strix Halo aka the AMD Ryzen AI Max+ 395 (PRO), there are notebooks like the ZBook Ultra G1a and tablets like the Asus ROG Flow Z13 that come close to the MacBook power/performance ratio[2], due to the fact that they use high-bandwidth soldered-on memory, which allows for GPUs with shared VRAM, similar to Apple's strategy.

Framework has not managed to put this chip in a notebook yet, but shipped a desktop variant. They also pointed out that there was no way to use LPCAMM2 or any other modular RAM tech with that machine, because it would have slowed it down / increased latencies to an unusable state.

So I'm pretty sure the main reason for Apple's success is the deeply integrated architecture, and I'm hopeful that AMD's next-generation Strix Halo APUs might provide this with higher efficiency, and that Framework adopts these chips in their notebooks. Maybe they just did in the 16?! Let's wait for this announcement: https://www.youtube.com/watch?v=OZRG7Og61mw

Regarding the deeply thought through integration there is a story I often tell: Apple used to make iPods. These had support for audio playback control with their headphone remotes (e.g. EarPods), which are still available today. These had a proprietary ultra sonic chirp protocol[3] to identify Apple devices and supported volume control and complex playback control actions. You could even navigate through menus via voiceover with longpress and then using the volume buttons to navigate. Until today with their USB-C-to-AudioJack Adapters these still work on nearly every apple device published after 2013 and the wireless earbuds also support parts of this. Android has tried to copy this tiny little engineering wonder, but until today they did not manage to get it working[4]. They instead focus on their proprietary "longpress" should work in our favour and start "hey google" thing, which is ridiculously hard to intercept / override in officially published Android apps... what a shame ;)

1: https://youtu.be/51W0eq7-xrY?t=773

2: https://youtu.be/oyrAur5yYrA

3: https://tinymicros.com/wiki/Apple_iPod_Remote_Protocol

4: https://github.com/androidx/media/issues/2637

  • aurareturn 2 days ago

    AMD's Strix Halo is still significantly behind the M4 in performance and efficiency. Not even close.

ryao a day ago

Chrome has been very conservative about enabling hardware acceleration features on Linux. Look under about://gpu to see a list. It is possible to force them via command line flags. That said, this is only part of the story.

There are different kinds of transistors that can be used when making chips. There are slow, but efficient transistors and fast, but leaky transistors. Getting an efficient design is a balancing act where you limit use of the fast transistors to only the most performance critical areas. AMD historically has more liberally used these high performance leaky transistors, which enabled it to reach some of the highest clock frequencies in the industry. Apple on the other hand designed for power efficiency first, so its use of such transistors was far more conservative. Rather than use faster transistors, Apple would restrict itself to the slower transistors, but use more of them, resulting in wider core designs that have higher IPC and matched the performance of some of the best AMD designs while using less power. AMD recently adopted some of Apple’s restraint when designing the Zen 5c variant of its architecture, but it is just a modification of a design that was designed for significant use of leaky transistors for high clock speeds:

https://www.tomshardware.com/pc-components/cpus/amd-dishes-m...

The resulting clock speeds of the M4 and the Ryzen AI 340 are surprisingly similar, with the M4 at 4.4GHz and the Ryzen AI 340 at 4.8GHz. That said, the same chip is used in the Ryzen AI 350 that reaches 5.0GHz.

There is also the memory used. Apple uses LPDDR5X on the M4, which runs at lower voltages and has tweaks that sacrifice latency to an extent for a big savings in power. It also is soldered on/close to the CPU/SoC for a reduction in the power needed to transmit data to/from the CPU. AMD uses either LPDDR5X or DDR5. I have not kept track of the difference in power usage between DDR versions and their LP variants, but expect the LP memory to use half the power, if not less. Memory in many machines can use 5W or more just at idle, so cutting memory power usage can make a big impact.

Additionally, x86 has a decode penalty compared to other architectures. It is often stated that this is negligible, but those statements began during the P4 era when a single core used ~100W where a ~1W power draw for the decoder really was negligible. Fast forward to today where x86 is more complex than ever and people want cores to use 1W or less, the decode penalty is more relevant. ARM, using fixed length instructions and having a fraction of the instructions, uses less power to decode its instructions, since its decoder is simpler. To those who feel compelled to reply to repeat the mantra that this is negligible, please reread what I wrote about it being negligible when cores use 100W each and how the instruction set is more complex now. Let’s say that the instruction decoder uses 250mW for x86 and 50mW for ARM. That 200mW difference is not negligible when you want sub-1W core energy usage. It is at least 20% of the power available to the core. It does become negligible when your cores are each drawing 10W like in AMD’s desktops.

Apple also has taken the design choice of designing its own NAND flash controller and integrating it into its SoC, which provides further power savings by eliminating some of the power overhead associated with an external NAND flash controller. Being integrated into the SoC means that there is no need to waste power on enabling the signals to travel very far, which gives energy savings, versus more standard designs that assume a long distance over a PCB needs to be supported.

Finally, Apple implemented an innovation for timer coalescing in Mavericks that made a fairly big impact:

https://www.imore.com/mavericks-preview-timer-coalescing

On Linux, coalescing is achieved by applying a default timer slack of 50 microseconds to traditional Unix timers. This can be changed, but I have never seen anyone actually do that:

https://man7.org/linux/man-pages/man2/pr_set_timerslack.2con...
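
For example, here is a minimal sketch (mine, not from the man page; the 2-second value is arbitrary) of a Linux background task widening its own timer slack via prctl(2):

  /* Raise this thread's timer slack so the kernel is free to coalesce
     its timers with others. Values are in nanoseconds. */
  #include <stdio.h>
  #include <sys/prctl.h>

  int main(void)
  {
      long slack = prctl(PR_GET_TIMERSLACK, 0, 0, 0, 0);
      printf("default timer slack: %ld ns\n", slack);

      /* A delay-tolerant background service could allow ~2s of slack,
         making coalescing with some other timer very likely. */
      if (prctl(PR_SET_TIMERSLACK, 2000000000UL, 0, 0, 0) != 0)
          perror("prctl(PR_SET_TIMERSLACK)");
      return 0;
  }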

That was done to retroactively support coalescing in UNIX/Linux APIs that did not support it (which were all of them). However, Apple made its own new event handling API, Grand Central Dispatch, that exposes coalescing in a very obvious way via the leeway parameter while leaving the UNIX/BSD APIs untouched, and it is now the preferred way of doing event handling on macOS:

https://developer.apple.com/documentation/dispatch/1385606-d...
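
For contrast, a rough sketch of the GCD side (my own example, not Apple's sample code): the last argument to dispatch_source_set_timer() is the leeway, and a delay-tolerant service can make it generous:

  /* A repeating GCD timer that may fire up to 5s late, making it an easy
     coalescing candidate. Builds on macOS with: clang timer.c */
  #include <dispatch/dispatch.h>
  #include <stdio.h>

  int main(void)
  {
      dispatch_queue_t q =
          dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
      dispatch_source_t timer =
          dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, q);

      /* Fire every 60s, with 5s of leeway (the 4th argument). */
      dispatch_source_set_timer(timer,
                                dispatch_time(DISPATCH_TIME_NOW, 0),
                                60 * NSEC_PER_SEC, 5 * NSEC_PER_SEC);
      dispatch_source_set_event_handler(timer, ^{
          puts("periodic background work");
      });
      dispatch_resume(timer);
      dispatch_main(); /* park the main thread; the timer runs on q */
  }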

Thus, a developer of a background service on macOS that can tolerate long delays could easily set the leeway to multiple seconds, which would essentially guarantee it gets coalesced with some other timer, while a developer of a similar service on Linux could, but probably will not, since the timer slack is something the developer would need to go out of his way to modify, rather than something in his face like the leeway parameter in Apple's API. I did check how this works on Windows. Windows supports a similar per-timer delay via SetCoalescableTimer(), but the developer would need to opt into this by using it in place of SetTimer(), and it is not clear there is much incentive to do so. To circle back to Chrome: it uses libevent, which uses the BSD kqueue on macOS. As far as I know, kqueue does not take advantage of timer coalescing on macOS, so the Mavericks changes would not benefit Chrome very much, and the improvements that do benefit Chrome are elsewhere. However, I thought the timer coalescing stuff was worth mentioning given that it applies to many other things on macOS.
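
To make the Windows opt-in above concrete, a hedged sketch (the 5-second tolerance is arbitrary; requires Windows 8+): SetCoalescableTimer() is a drop-in for SetTimer() whose extra final argument is the tolerated delay in milliseconds:

  /* Same idea as SetTimer(), but the last argument lets the scheduler
     fire the callback up to 5s late so it can be coalesced. */
  #include <windows.h>

  static void CALLBACK OnTimer(HWND hwnd, UINT msg, UINT_PTR id, DWORD tick)
  {
      (void)hwnd; (void)msg; (void)id; (void)tick;
      /* periodic background work */
  }

  int main(void)
  {
      SetCoalescableTimer(NULL, 1, 60 * 1000, OnTimer, 5000);

      /* WM_TIMER is delivered through the message loop. */
      MSG msg;
      while (GetMessage(&msg, NULL, 0, 0) > 0)
          DispatchMessage(&msg);
      return 0;
  }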

gigatexal 2 days ago

I think the Ryzen ai max 395+ gets really close in terms of performance per watt.

  • aurareturn 2 days ago

    It isn't.

    https://imgur.com/a/yvpEpKF

    In single threaded CPU performance, M4 Pro is roughly 3.6x more efficient while also being 50% faster.

    • gigatexal a day ago

      Then the m5 is going to be even more of a beast.

numpad0 2 days ago

> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

Is that your metric of performance? If so...

  $ sudo cpufreq-set -u 50MHz
  done!

altairprime 2 days ago

M1’s efficiency/thermals performance comes from having hardware-accelerated core system libraries.

Imagine that you made an FPGA do x86 work, and then you wanted to optimize libopenssl, or libgl, or libc. Would you restrict yourself to only modifying the source code of the libraries but not the FPGA, or would you modify the processor to take advantage of new capabilities?

As a made-up example: when the iPhone 27 comes out, it won’t support booting on iOS 26 or earlier, because the drivers necessary to light it up aren’t yet published; and, similarly, it can have 3% less battery weight because they optimized the display controller to DMA more efficiently through changes to its M6 processor and the XNU/Darwin 26 DisplayController dylib.

Neither Linux, Windows, nor Intel have shown any capability to plan and execute such a strategy outside of video codecs and network I/O cards. GPU hardware acceleration is tightly controlled and defended by AMD and Nvidia who want nothing to do with any shared strategy, and neither Microsoft nor Linux generally have shown any interest whatsoever in hardware-accelerating the core system to date — though one could theorize that the Xbox is exempt from that, especially given the Proton chip.

I imagine Valve will eventually do this, most likely working with AMD to get custom silicon that implements custom hardware accelerations inside the Linux kernel that are both open source for anyone to use, and utterly useless since their correct operation hinges on custom silicon. I suspect Microsoft, Nintendo, and Sony already do this with their gaming consoles, but I can’t offer any certainty on this paragraph of speculation.

x86 isn’t able to keep up because x86 isn’t updated annually across software and hardware alike. M1 is what x86 could have been if it was versioned and updated without backwards compatibility as often as Arm was. It would be like saying “Intel’s 2026 processors all ship with AVX-1024 and hardware-accelerated DMA, and the OS kernel (and apps that want the full performance gains) must be compiled for its new ABI to boot on it”. The wreckage across the x86 ecosystem would be immense, and Microsoft would boycott them outright to try and protect itself from having to work harder to keep up — just like Adobe did with Apple M1, at least until their userbase started canceling subscriptions en masse.

That’s why there are so many Arm Linux architectures: for Arm, this is just a fact of everyday life, and that’s what gave the M1 such a leg up on x86: not having to support anything older than your release date means you can focus on the sort of boring incremental optimizations that wouldn’t be permissible in the “must run assembly code written twenty years ago” environment assumed by Lin/Win today.

  • wmf 2 days ago

    This isn't really true. Linux doesn't use any magic accelerators yet it runs very fast on Apple Silicon. It's just the best processor.

    • astrange 2 days ago

      P/E cores do benefit from software tuning, but aside from that it's almost all hardware.

      The GPU is significantly different from other desktop GPUs but it's in principle like other mobile GPUs, so not sure how much better Linux could be adapted there.

  • mirekrusin 2 days ago

    macOS releases still work fine on intel based macs.

pengaru 2 days ago

The M1 MacBook Pro I used at work for several months, until the Ubuntu Ryzen 7 7840U P14s w/32GB RAM arrived, didn't seem particularly amazing.

The only real annoying thing I've found with the P14s is the Crowdstrike junk killing battery life when it pins several cores at 100% for an hour. That never happened in MacOS. These are corporate managed devices I have no say in, and the Ubuntu flavor of the corporate malware is obviously far worse implemented in terms of efficiency and impact on battery life.

I recently built myself a 7970X Threadripper and it's quite good perf/$ even for a Threadripper. If you build a gaming-oriented 16c ryzen the perf/$ is ridiculously good.

No personal experience here with Frameworks, but I'm pretty sure Jon Blow had a modern Framework laptop he was ranting a bunch about on his coding live streams. I don't have the impression that Framework should be held as the optimal performing x86 laptop vendor.

  • musicale 2 days ago

    > That never happened in MacOS

    Oh you've gotten lucky then. Or somehow disabled crowdstrike.

casey2 a day ago

Hardware performance literally doesn't matter if your software doesn't use it. The more SoC-like design of the M series essentially gives performance developers an easier time. x86 vendors are fighting a losing battle until they change their image of what an x86-based computer should look like. You aren't going to beat Apple insiders; x86 vendors have a market opportunity here, but they've had it for two decades at this point and have refused to switch, so they are likely incapable and will die. Sad.

dismalaf a day ago

Apple designs their laptops to throttle power when they warm up too much. Framework gives theirs a fan.

It's a design choice.

Also, different Linux distros/DEs prioritize different things. Generally they prioritize performance over battery life.

That being said, I find Debian GNOME to be the best on battery life. I get 6 hours on an MSI laptop that has an 11th gen Intel processor and a battery with only 70% capacity left. It also stays cool most of the time (except gaming while being plugged in) but it does have a fan...

queenkjuul a day ago

On my Ryzen laptop, I have to manually ensure that Linux is setting the right power settings. Once I do that, my 5950HS laptop from 2022 is completely competitive with my work MacBook M2. Louder and hotter at full tilt, but it also has a better GPU (even with the onboard Nvidia turned off), and I can get ~6 hours of web dev out of it if I'm not constantly churning tons of files.

I would try it with Windows for a better comparison, or get into the weeds of getting Linux to handle the ryzen platform power settings better.

With Ubuntu properly managing fans and temps and clocks, I'll take it over the Mac 10/10 times.

moralestapia a day ago

>My daily workhorse is a M1 Pro that I purchased on release date, It has been one of the best tech purchases I have made

Same, I just realized it's three years old. I've used it every day for hours and it still feels like the first day I got it.

They truly redeemed themselves with this one, as their laptops were getting worse and worse and worse (keyboard fiasco, Touch Bar, ...).

lawn 2 days ago

> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

I've got the Framework 13 with the Ryzen 5 7640U and I routinely have dozens of tabs open, including YouTube videos, docker containers, handful of Neovim instances with LSPs and fans or it getting hot have never been a problem (except when I max out the CPU with heavy compilation).

The issue you're seeing isn't because x86 is lacking but something else in your setup.

pharrington 2 days ago

I don't know, but I suspect the builds of the programs you're using play a huge factor in this. Depending on the Linux distro and package management you're using, you just might not be getting programs that are compiled with the latest x86_64 optimizations.

j45 2 days ago

One was built from the ground up more recently than the other.

Looking beyond Apple/Intel, AMD recently came out with a cpu that shares memory between the GPU and CPU like the M processors.

The Framework is a great laptop - I'd love to drop a mac motherboard into something like that.

wslh 2 days ago

Most probably because it is not impacting Microsoft's sales?

dmitrygr 2 days ago

There is one positive to all of this. Finally, we can stop listening to people who keep saying that Apple Silicon is ahead of everyone else because they have access to better process. There are now chips on better processes than M1 that still deliver much worse performance per watt.

  • dapperdrake 2 days ago

    Go down the rabbit hole of broken compiler settings for debian default builds, if you want to see how much low-hanging fruit we still have.

    Who here would be interested in testing a distro like debian with builds optimized for the Framework devices?

    • pabs3 17 hours ago

      Got a link about the Debian issue? Is it just that they build for the CPU baseline to maximise hardware support?

    • wiml 2 days ago

      Should .. should I install gentoo?

      • codr7 2 days ago

        The answer is always yes, continuously.

  • Panzer04 2 days ago

    Because of a random anecdote on Hacker News?

  • bigyabai 2 days ago

    Not sure why you'd think that, comparing a heterogeneous core architecture to a homogeneous one. Mobile Ryzen chips aren't designed for power efficiency; if you want a "fair" comparison then pull up a big.LITTLE x86 chip or benchmark Apple's performance cores vs AMD's mobile chipsets.

    Once you normalize for either efficiency cores or performance cores, you'll quickly realize that the node lead is the largest advantage Apple had. Those guys were right, the writing was on the wall in 2019.

    • dmitrygr 2 days ago

      I guess that’s the new excuse. Except it doesn’t work. I can off-line all the efficiency cores on my M1 laptop and still run circles around the new x86 stuff in performance per watt.

      • bigyabai 2 days ago

        Well don't just tell me about it, show me. Link the Geekbench results when it's done running.

mschuster91 2 days ago

> I am sorely disappointed, using the Framework feels like using an older Intel based Mac. If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

A big thing is storage. Apple uses extremely fast storage directly attached to the SoC and physically very very close. In contrast, most x86 systems use storage that's socketed (which adds physical signal runtime) and that goes via another chip (southbridge). That means, unlike Mac devices that can use storage as swap without much practical impact, x86 devices have a serious performance penalty.

Another part of the issue when it comes to cooling is that Apple is virtually the only laptop manufacturer that makes solid full aluminium frames, whereas most x86 laptops are made out of plastic and, for higher-end ones, magnesium alloy. That gives Apple the advantage of being able to use the entire frame to cool the laptop, allowing far more thermal input before saturation occurs and the fans have to activate.

  • Rohansi 2 days ago

    > A big thing is storage. Apple uses extremely fast storage directly attached to the SoC and physically very very close. In contrast, most x86 systems use storage that's socketed (which adds physical signal runtime) and that goes via another chip (southbridge).

    Why would PCIe SSDs need to go through a southbridge? The CPU itself provides PCIe lanes that can be used directly.

    > That means, unlike Mac devices that can use storage as swap without much practical impact, x86 devices have a serious performance penalty.

    Swap is slow on all hardware. No SSD comes close to the speed of RAM - not even Apple's. Latency is also significantly worse when you trigger a page fault and then need to wait for the page to load from disk before the thread can resume execution.

    • mschuster91 2 days ago

      > The CPU itself provides PCIe lanes that can be used directly.

      It does, but if you look at the mainboard manuals of computers, usually it's 32 lanes of which 16 go to the GPU slot and 16 to the southbridge, so no storage directly attached to the CPU. Laptops are just as bad.

      Intel has always done price segmentation with the number of PCIe lanes exposed to the world.

      Threadripper AMD CPUs are a different game, but I'm not aware of anyone, even "gamer" laptops, sticking such a beast into a portable device.

      > Latency is also significantly worse when you trigger a page fault and then need to wait for the page to load from disk before the thread can resume execution.

      Indeed, but the difference in performance between an 8GB Windows laptop and an 8GB M-series Apple laptop is noticeable, even if all it's running is the base OS and Chrome with a few dozen tabs.

      • Rohansi a day ago

        > It does, but if you look at the mainboard manuals of computers, usually it's 32 lanes of which 16 go to the GPU slot and 16 to the southbridge, so no storage directly attached to the CPU. Laptops are just as bad.

        Why would the southbridge need a whole 16 lanes? That's 32 GB/s of bandwidth (or 64, if PCIe 5). My (AMD) motherboard has the GPU and two M.2 sockets connected directly to the CPU and it's one of the cheaper ones. No idea about my laptop but I expect it to be similar because it's also AMD. Intel is obviously different here because they're more stingy with PCIe lanes.

        There should be no reason for a laptop with only an integrated GPU to dangle storage off the southbridge. They take at most 4 lanes and can work with less.

        > Indeed, but the difference in performance between an 8GB Windows laptop and an 8GB M-series Apple laptop is noticeable, even if all it's running is the base OS and Chrome with a few dozen tabs.

        Any Windows laptop that comes with 8GB of RAM is going to have a crappy SSD included because those are always built to be cheap, not performant. It could even be a SATA SSD (500MB/s bandwidth max). Most likely they'd come with a processor significantly slower and a decent chance the RAM would also be single channel, too.

      • cesarb a day ago

        > It does, but if you look at the mainboard manuals of computers, usually it's 32 lanes of which 16 go to the GPU slot and 16 to the southbridge, so no storage directly attached to the CPU.

        AFAIK that's not the case at least on AMD (not Threadripper, but the mainstream AM5 socket). They have 28 lanes of which 16 go to the GPU slot, 4 go to the southbridge, 4 are dedicated to M.2 NVMe storage, and 4 go to either another PCIe slot or another M.2 NVMe storage. See for a random example this motherboard manual https://download.asrock.com/Manual/B650M-HDVM.2.pdf which has a block diagram on page 8 (page 12 of the PDF).

givemeethekeys a day ago

They haven’t beat the low morale out of their workforce yet.

atwrk 2 days ago

To me it simply looks like Apple buys out the first year of every new TSMC node, and that is the main reason why the M series is more efficient. Strix Halo (N4P) has, according to Wikipedia, a transistor density of about 140 MTr/mm2, while the M4 (N3E) has about 210 MTr/mm2. Isn't the process node alone enough to explain the difference? (+ software optimizations in macOS, of course)

roscas 2 days ago

RISC vs CISC. Why do you think a mainframe is so fast?

ARM is great. Those M Macs are the only thing I could buy used and put Linux on.

  • JustExAWS 2 days ago

    I thought people stopped believing this around 2005 when Apple users finally had to admit that PPC was behind x86.

    Even though this was the case for the most part during the entire history of PPC Macs (I owned two during these years)

    https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter

    • marshray 2 days ago

      RISC lost its meaning once SPARC added an integer multiply instruction.

    • platevoltage 2 days ago

      At least my G5 helped keep my room warm in the winter.

    • hajile a day ago

      Chips and Cheese makes some bad arguments in that article.

      Their claim that ARM decoders are just as complex wasn't true then and is even less true now. ARM reduced decoder size 75% from A710 to A715 by dropping legacy 32-bit stuff. Considering that x86 is way more complex than 32-bit ARM, the difference between an x86 and ARM decoder implementation is absolutely massive.

      They abuse the decoder power paper (and that paper also draws a conclusion its own data doesn't support). The data shows that some 22% of total core power is used by the decoder for integer/ALU workloads. As 89% of all instructions in the entire Ubuntu repos are just 12 integer/ALU instructions, we can infer that the power cost of the decoder is significant (I'd consider nearly a quarter of the total power budget to be significant anyway).

      The x86 decoder situation has gotten worse with Golden Cove (with 6 decoders) being infamous for its power draw and AMD fearing power draw so much that they opted for a super-complex dual 4-wide decoder setup. If the decoder power didn't matter, they'd be doing 10-wide decoders like the ARM designers.

      The claim that ARM uses uops too is somewhere between a red herring and a false equivalency. ARM uops are certainly less complex to create (otherwise they'd have kept around the uop cache), and ARM instructions being inherently less complex means that uop encoding is also going to be simpler for a given uarch compared to x86.

      They then have an argument that proves too much when they say ARM has bloat too. If bloat doesn't matter, why did ARM make an entirely new ISA that ditches backward compatibility? Why take any risk to their ecosystem if there's no reward?

      They also skip over the fact that objectively bad design exists. NOBODY out there defends branch delay slots. They are universally considered an active impediment to high-performance designs with ISAs like MIPS going so far as to create duplicate instructions without branch delay slots in order to speed things up. You can't argue that ISA definitely matters here, but also argue that ISA never makes any difference at all.

      The "all ISAs get bloated over time" is sheer ignorance. x86 has roots going back to the early 1970s before we'd figured out computing. All the basics of CPU design are now stable and haven't really changed in 30+ years. x86 has x87 which has 80-bits because IEEE 754 didn't exist yet. Modern ISAs aren't repeating that mistake. x86 having 8 registers isn't a mistake they are going to make. Neither is 15 different 128-bit SIMD extensions or any of the many other bloated mess-ups x86 has made over the last 50+ years. There may be mistakes, but they are almost certainly going to be on fringe things. In the meantime, the core instructions will continue to be superior to x86 forever.

      They also fail to address implementation complexity. Some of the weirdness of x86 like tighter memory timing gets dragged through the entire system complicating things. If this results in just 10% higher cost and 10% longer development time, that means a RISC company could develop a chip for $5.4B over 4.5 years instead of $6B over 5 years which represents a massive savings and a much lower opportunity cost while giving a compounding head-start on their x86 competitor that can be used to either hit the market sooner or make even larger performance jumps each generation.

      Finally, optimizing something like RISC-V code is inherently easier/faster than optimizing x86 code because there is less weirdness to work around. RISC-V basically just has one way to do something and it'll always be optimized while x86 often has different ways to do the same thing and each has different tradeoffs that make sense in various scenarios.

      As to PPC, Apple didn't sell enough laptops to pay for Motorola to put enough money into the designs to stay competitive.

      Today, Apple macbooks + phones move nearly 220M chips per year. For comparison, total laptop sales last year were around 260M. If Apple had Motorola make a chip today, Motorola would have the money to build a PPC chip that could compete with and surpass what x86 offers.

      • JustExAWS a day ago

        Fair enough.

        And don’t forget that Apple can do things like completely remove all of the hardware that supports 32 bit code and tell developers to just deal with it.

  • alexjplant 2 days ago

    > RISC vs CISC. Why you think a mainframe is so fast?

    This hasn't been true for decades. Mainframes are fast because they have proprietary architectures that are purpose-built for high throughput and redundancy, not because they're RISC. The pre-eminent mainframe architecture these days (z/Architecture) is categorized as CISC.

    Processors are insanely complicated these days. Branch prediction, instruction decoding, micro-ops, reordering, speculative execution, cache tiering strategies... I could go on and on but you get the idea. It's no longer as obvious as "RISC -> orthogonal addressing and short instructions -> speed".

    • musicale 2 days ago

      > The pre-eminent mainframe architecture these days (z/Architecture) is categorized as CISC.

      Very much so. It's largely a register-memory (and indeed memory-memory) rather than load-store architecture, and a direct descendant of the System/360 from 1964.

  • baq 2 days ago

    Everything is RISC after it gets decoded. It isn’t 1990 anymore. The decoder costs maybe 1% performance.

    • hajile a day ago

      In Haswell, 4.8w out of 22.1w for the core were used by the decoder for integer/ALU instructions[0]. According to this[1] analysis of the entire Ubuntu repository, 89% of all instructions come from a set of just 12 instructions (all integer/ALU).

      From this we can infer that for most normal workloads, almost 22% of the Haswell core power was used in the decoder. As decoders have gotten wider and more complex in recent designs, I see no reason why this wouldn't be just as true for today's CPUs.

      [0] https://www.usenix.org/system/files/conference/cooldc16/cool...

      [1] https://oscarlab.github.io/papers/instrpop-systor19.pdf

      • menaerus 10 hours ago

        Misconstrued arguments.

        First paper is saying that they measured 3% for the floating-point workloads and 10% for the integer workloads.

        > Based on Figure 3, the power consumption of the instruction decoders is very small compared with the other components. Only 3% of the total package power is consumed by the instruction decoding pipeline in this case.

        and

        > As a result, the instruction decoders end up consuming 10% of the total package power in benchmark #2.

        Then the paper continues to say that the benchmark was synthetic, and nothing close to your extrapolation that the experiment's results would apply if repeated across the entire Ubuntu repository.

        > Nevertheless, we would like to point out that this benchmark is completely synthetic.

        And finally the paper says that the typical power draw is expected to be much lower in real-world scenarios. Their microbenchmark is basically measuring 10% as an upper bound for instruction decoder power draw at close to the highest IPC achievable on that machine - Haswell is a 4-wide decode machine, and 3.86 IPC never happens on real-world code, as they also acknowledge:

        > Real applications typically do not reach IPC counts as high as this. Thus, the power consumption of the instruction decoders is likely less than 10% for real applications.

        If anything can be concluded from this article, it is that the power draw of the instruction decoder is 3% when IPC is 1.67, and that much more closely resembles the IPC figures of real-world programs.

        • hajile 5 hours ago

          There's a lot to break down here. FP/SIMD vs int/ALU, package vs core power, percentage vs total, branching vs unbranching code, average IPC, etc.

          Let's start with package vs core power. Package power is a terrible metric for core efficiency. The CPU cores on an M3 Max peak out at around 50w under a power virus which is around 40-50% of total package power. M3 CPU cores peak out at around 21w with a total package power of somewhere around 25-30w or 70-85% of package power.

          Would you then assert that the M3 Max cores are TWICE as power efficient as the M3 cores? Of course not. They are the exact same CPU core design.

          Package power changes based on the design and target market of the chip. Core power is the ONLY useful metric here. That number indicates that decoders use between 8% and 22% of total core power and this is going to be essentially true whether you are in a 30w TDP or a 300w TDP.

          This ties directly into the percentage vs total hiding the truth. At 4.8w of power for the decoders in one core, an 8-core Haswell 5960x would use 38.4w of its 140w TDP on decoding (or a whopping 27.4% of TDP package power if you still believe that is relevant). On an 18-core server variant, this would theoretically be 86.4w out of a 165w TDP package power. Even if we cut down to the 1.8w you say is reasonable, that's still 14.4w for the 8-core and 32.4w for the 18-core (still 19.6% of package power).

          Not only is this a significant percentage, but it is also significant in absolute watts. Quoting 3% is just an attempt to hide the truth of an ugly situation. Even if the 3% were true, chip companies spend massive amounts of money for less than 1% power savings, so it would still be important.

          Next, let's discuss FP/SIMD vs int/ALU. SIMD takes more execution power than the ALU. This makes sense if you just look at a die shot. All 4 ALUs together are something like 10-20% the size of the SIMD units. When you turn a SIMD unit on, it sucks a lot of power. This is why SIMD throttles so often and Haswell is no exception. The ALUs are executing 2.3x more instructions while using 2.1x more power (which means the SIMD units are aggressively power-gating most of the SIMD execution units).

          Notice the cache differences. SIMD is taxing the L1 cache more (4.8w vs 3.8w) and massively taxing the L2/L3 cache (11.2w vs 0.1w). The chip is hitting its power limits and downclocking the CPU core so it can redirect power to the caches. We see this in the FP code using 4.9w with the larger SIMD units while the ALU code used 10.4w. I'd also note that the power curve matters here: power doesn't scale linearly with clockspeed, so reducing the clockspeed has multiplicative effects on reducing decoder power.

          If we compute the decode/execution power ratios, we get .37 for SIMD and .46 for ALU, which shows that even in this ideal situation, the relative power draw isn't as good as you are led to believe.

          Finally, there are 4 ALU ports, but only 2 SIMD ports. In practice, this means that half of the decoders will simply not turn on in this test or will turn on long enough to race way ahead in the uop cache then turn off.

          If the core were not downclocking and there were 4 SIMD ports, the decoder power consumption would be higher than 1.8w.

          You are basically correct about average IPC, but wrong about its impacts. The SpecInt suite averages around 1.6-1.8 instructions/clock on Haswell[0] and is representative of most code out there (which is why ARM designers' focus on very wide chips with very high IPC is important).

          What it misses is branches. The CPU can't wait until a branch happens to start decoding. The branch predictor basically pre-fetches cache lines into I-cache. The decoders then take the next cache blocks and decode them into the uop cache lines. If a branch happens approximately every 5th instruction, cache lines are usually 64 bytes, and average x86 instructions are 4.25 bytes long, then a cache line holds roughly 15 instructions spanning about 3 branches, and you can surmise that both sides of most local branches wind up being decoded even though both are not used. This means that the IPC of the decoders is higher than the IPC of the ALUs.

          In all cases though, it can be stated pretty clearly that x86 decode isn't "free" and has a significant resource cost attached both in relative and absolute terms.

          [0] https://tosiron.com/papers/2018/SPEC2017_ISPASS18.pdf

  • matt_s 2 days ago

    It's fun watching things swing back and forth over time. I remember having those mini-fridge-sized Sun servers, all running RISC SPARC-based CPUs if I remember correctly. I wonder if there would be some merit in RISC-based Linux servers, like maybe the power consumption is lower? I forget the pros/cons of RISC vs CISC CPUs.

chmod775 a day ago

> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

Change TDP, TDC, etc. and fan curves if you don't like the thermal behavior. Your Ryzen has low enough power draw that you could even just cool it passively. It has a lower power draw ceiling than your M1 Pro while exceeding it in raw performance.

Also comparing chips based on transistor density is mostly pointless if you don't also mention die size (or cost).

  • rangestransform a day ago

    The only cost that matters is the cost of the final consumer product; it's AMD's or Lenovo's fault that they can't afford to eat the cost of a bigger die for a better consumer experience.

jarpineh a day ago

I wonder what the difference is between the efficiency of the MacBook display and the Framework laptop's. Whilst the CPU and GPU take considerable power, they aren't usually working at 100% utilization. The display, however, has to be using power all the time, possibly at high brightness in daytime. MacBooks (all of them?) have high-resolution displays, which should be much more power hungry than the Framework 13's IPS panel. Pro models use mini-LED, which needs even more power.

I did ask an LLM for some stats about this. According to Claude Sonnet 4 through VS Code (for what that's worth), my MacBook's display can consume the same or even more power than the CPU does for "office work". Yet my M1 Max 16" seems to last a good while longer than whatever it was I got from work this year. I'd like to know how those stats are produced (or whether they are hallucinated...). There doesn't seem to be a way to get the display's power usage on M-series Macs. So you'd need to devise a testing regime with the display off and the display at 100% brightness to get some indication of its effect on power use.