Same. This is a surprisingly simple recipe for a happy life and helps prevent lifestyle inflation. It reminds me of PG's "Keep Your Identity Small" (https://paulgraham.com/identity.html).
That's fair, but it wasn't the point of the article because it's messy. Many would argue that core LLMs are 'trending' toward commodity, and I'd agree.
But it's complicated because commodities don't carry brand weight, yet there's obviously a brand power law. I (like most other people) use ChatGPT. But for coding I use Claude and a bit of Gemini, etc. depending on the problem. If they were complete commodities, it wouldn't matter much what I used.
Part of the issue here is that while LLMs may be trending toward commodity, "AI" isn't. As more people use AI, they get locked into their habits, memory (customization), ecosystem, etc. And as AI improves, if everything I do has less and less to do with the hardware and I care more about everything else, then the hardware (e.g. the iPhone) becomes the commodity.
Similar with AWS: if data/workflow/memory/lock-in becomes the moat, I'll want everything where the rest of my infra is.
Your comment on Intel is correct, but it's also true that TSMC could invest billions into advanced fabs because Apple gave them a huge guaranteed demand base. Intel didn't have the same economic flywheel since PCs/servers were flat or declining.
That's a good clarification on Amazon, running on commodity hardware with competitive pricing != competing on price alone. It would have been better to clarify this difference when pointing out that they're trying the same commodity approach in AI.
Amazon is doing everything except competing on price, and they don't run commodity hardware either. They're even developing their own chips. Sure, they have "commodity" GPUs and CPUs in some lineups, but they also have Graviton.
If you get something this mundane wrong from the start I don't know how I could trust anything else from the post either.
True, but Apple is a consumer hardware company, which requires billions of users at their scale.
We may care about running LLMs locally, but 99% of consumers don't. They want the easiest/cheapest path, which will always be the cloud models. Spending ~$6k (what my M4 Max cost) every N years since models/HW keep improving to be able to run a somewhat decent model locally just isn't a consumer thing. Nonviable for a consumer hardware business at Apple's scale.
I'm somewhat bullish on Google as well, they have the opportunity if they can figure out the product (which they are bad at) and they have the edge in cloud with their models + TPUs.
But your comment about the phone could have been about horses, or the notepad or any other technology paradigm we were used to in the past. Maybe it'll take a decade for the 'perfect' AI form factor to emerge, but it's unlikely to remain unchanged.
Yes brain chips and implants will be the next form factor. Until then the slab of battery and screen in your pocket is going to be present (and probably remain for a while even after we get brain implants).
Right, but remember Microsoft was 'working on' mobile also. The issue is that they're working on it the wrong way. Amazon is focused on price and treating it like a commodity. Apple is trying to keep the iPhone at the center of everything. Thus neither is fully committing to the paradigm shift: they say it's one, but they aren't acting like it, because their existing strategy/culture precludes them from doing so.
> The issue is that they're working on it the wrong way.
So is everyone else, to be fair. Chat is a horrible way to interact with computers — and even if we accept that worse is better, its only viable future is to include ads in the responses. That isn't a game Apple is going to want to play. They are a hardware company.
More likely someday we'll get the "iPhone moment" when we realize all previous efforts were misguided. Can Apple rise up then? That remains to be seen, but it will likely be someone unexpected. Look at any successful business venture and the eventual "winner" is usually someone who sat back and watched all the mistakes be made first.
We begrudgingly accept chat as the lowest common denominator when there is no better option, but it's clear we don't prefer it when better options are available. Just look in any fast food restaurant that has adopted those ordering terminals and see how many are still lining up at the counter to chat with the cashier... In fact, McDonald's found that their sales rose by 30% when they eliminated chatting from the process, so clearly people found it to be a hindrance.
We don't know what is better for this technology yet, so it stands to reason that we reverted to the lowest common denominator again, but there is no reason why we will or will want to stay there. Someone is bound to figure out a better way. Maybe even Apple. That business was built on being late to the party. Although, granted, it remains to be seen if that is something it can continue absent Jobs.
What is representative, though, is simple use: All you have to do is use chat to see how awful it is.
It is better than nothing. It is arguably the best we have right now to make use of the technology. But, unless this AI thing is all hype and goes nowhere, smart minds aren't going to sit idle as progression moves towards maturity.
The problem with UX driven by this kind of interface is latency. Right now, this kind of flow goes more like:
"What burgers do you have?"
(Thinking...)
(4 seconds later:)
(expands to show a set of pictures)
"Sigh. I'll have the thing with chicken and lettuce"
(Thinking...)
(3 seconds later:)
> "Do you mean the Crispy McChicken™ McSandwich™?"
"Yes"
(Thinking...)
(4 seconds later:)
> "Would you like anything else?"
"No"
(Thinking...)
(5 seconds later:)
> "Would you like to supersize that?"
"Is there a human I can speak with? Or perhaps I can just point and grunt to one of the workers behind the counter? Anyone?"
It's just exasperating, and it's not easy to overcome until local inference is cheap and common. Even if you do voice recognition on the kiosk, which probably works well enough these days, there's still the round trip to OpenAI and then the inference time there. And of course, this whole scenario gets even worse and more frustrating anywhere with subpar internet.
Right. We talk when it is the only viable choice in front of us, but as soon as options are available, talk goes out the window pretty quickly. It is not our ideal mode of communication, just the lowest common denominator that works in most situations.
But, now, remember, unlike humans, AI can do things like materialize diagrams and pictures out of "thin air" and can even make them interactive right on the spot. It can also do a whole lot of things that you and I haven't even thought of yet. It is not bound by the same limitations of the human mind and body. It is not human.
For what reason is there to think that chat will remain the primary mode of using this technology? It is the easiest to conceive of way to use the technology, so it is unsurprising that it is what we got first, but why would we stop here? Chat works, but it is not good. There are so many unexplored possibilities to find better and we're just getting started.
I think chat will remain dominant, but we'll go into other modes as needed. There's no more efficient way to communicate "show me the burgers" than saying it; transmitting thoughts may be possible someday, but it's too far off right now. Then you switch to imagery or hand gestures or whatever else when they're a better way to show something.
> Chat is a horrible way to interact with computers
Chat is like the command line, but with easier syntax. This makes it usable by an order of magnitude more people.
Entertainment tasks lend themselves well to GUI type interfaces. Information retrieval and manipulation tasks will probably be better with chat type interfaces. Command and control are also better with chat or voice (beyond the 4-6 most common controls that can be displayed on a GUI).
> Chat is like the command line, but with easier syntax.
I kinda disagree with this analogy.
The command line is precise, concise, and opaque. If you know the right incantations, you can do some really powerful things really quickly. Some people understand the rules behind it, and so can be incredibly efficient with it. Most don't, though.
Chat with LLMs is fuzzy, slow-and-iterative... and differently opaque. You don't need to know how the system works, but you can probably approach something powerful if you accept a certain amount of saying "close, but don't delete files that end in y".
The "differently-opaque" for LLM chatbots comes in you needing to ultimately trust that the system is going to get it right based on what you said. The command line will do exactly what you told it to, if you know enough to understand what you told it to. The chatbot will do... something that's probably related to what you told it to, and might be what it did last time you asked for the same thing, or might not.
For a lot of people the chatbot experience is undeniably better, or at least lets them attempt things they'd never have even approached with the raw command line.
Exactly. Nobody really wants to use the command-line as the primary mode of computing; not even the experts who know how to use it well. People will accept it when there is no better tool for the job, but it is not going to become the preferred way to use computers again no matter how much easier it is to use this time. We didn't move away from the command-line simply because it required some specialized knowledge to use.
Chatting with LLMs looks pretty good right now because we haven't yet figured out a better way, but there is no reason to think we won't figure out a better way. Almost certainly people will revert to chat for certain tasks, like people still use the command-line even today, but it won't be the primary mode of computing like the current crop of services are betting on. This technology is much too valuable for it to stay locked in shitty chat clients (and especially shitty chat clients serving advertisements, which is the inevitable future for these businesses betting on chat — they can't keep haemorrhaging money forever and individuals won't pay enough for a software service).
In my experience, Claude Code is a fantastic way to interact with a (limited) subset of my computer. I do not think Claude is too far off from being able to do things like read my texts, emails, and calendar and take actions in those apps, which is pretty much what people want Siri to (reliably) do these days.
I've started using X a lot more in the last few months since I built an app that lets you track habits with a tweet called Xtreeks (yeah, I know..).
I enjoy the product, but wish they'd spend more time making the core elements of the product work. For example, aspects of the API just don't work as expected: for some reason the search and mention endpoints don't support long-form posts (>280 characters), even though X supports posts with thousands of characters. The result is that the API appears to work for some posts and just silently fails for others.
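For anyone hitting the same thing: in the v2 API, a long-form post keeps only a truncated preview in the `text` field and (when the endpoint returns it at all) puts the full body under a separate `note_tweet` object, so a defensive reader helps. A minimal sketch, with hypothetical example payloads; whether search/mentions ever populate `note_tweet` is exactly the part that seems broken:

```python
def full_text(tweet: dict) -> str:
    """Return the full body of a v2 tweet payload.

    Long posts (>280 chars) keep a truncated preview in `text`
    and the full body under `note_tweet.text`.
    """
    note = tweet.get("note_tweet") or {}
    return note.get("text") or tweet.get("text", "")

# Hypothetical payloads for illustration:
short = {"text": "hello"}
long_post = {"text": "truncated preview...",
             "note_tweet": {"text": "the whole long post"}}

print(full_text(short))      # "hello"
print(full_text(long_post))  # "the whole long post"
```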
In addition to the API issues, we've struggled with inexplicable labeling/suspension and shadow banning, even on the personal account I've had for over a decade (seemed to be triggered by using my VPN). I understand the desire to control spam, but it seems excessive. Or if you do it excessively, at least provide adequate tools/support to request review.
On my app's X account I paid for both API access ($200/mo) and the Verified Org status ($2,000) and had a hard time getting support that took days to reply, when it did reply at all. And when the person replied they had nothing to do with the account label process, so weren't able to help, which was quite frustrating. It was fine since this was a little side project, but if this was a business at scale and I was paying that much in addition to ad spend I'd be furious.
Anyway, I know nothing about the Head of Eng or what's at the root of these issues, but I'm a big fan of X and hope they're able to fix these things. It's such a valuable tool. I'm even fine if it's pay to play, but if someone is on the higher tiers of your paid plans the support should be available when they need it.
> For example, aspects of the API just don't work as expected, like for some reason search and mention endpoints do not have support for long form posts (>280 characters)
At the risk of sounding blunt, they probably fired the people who actually built the API at Twitter and it's now (internally) in maintenance mode. They very openly don't care about giving other companies data -- even with the new pricey API tiers -- or making the non-app/-website experience good at all. I doubt any of the new item types -- Articles, ordered media in tweets, etc. -- even have an API exposed.
> I'm a big fan of X and hope they're able to fix these things. It's such a valuable tool.
Genuine question: what value are people finding on X these days? I'm not saying it doesn't exist, but I gave up on the site due to the rampant toxicity a while ago. Are people just highly curating their feeds?
It started to feel like going to a trash dump with the intention of finding things that aren’t trash. Perhaps possible, but extremely smelly and occasionally the cause of a bed bug infestation.
1. Good ML accounts that allow me to stay up to date. Didn't find such concentration of them anywhere else, even after some have left. This is what mostly keeps me there.
2. Current info about the Russian aggression from a few trusted accounts, mostly in Ukrainian. Again, as I don't want to use Telegram, it's hard to find those in other places, though some good ones are also on Mastodon and Bsky.
> Genuine question: what value are people finding on X these days? I'm not saying it doesn't exist, but I gave up on the site due to the rampant toxicity a while ago. Are people just highly curating their feeds?
There are still a few Nitter instances running, so I use a web browser to check up on a few individuals (journalists, economists) sporadically.
(I don't have any apps from The Socials installed on my phone (and never have): always web-based lurking.)
> Is "I don't have social media apps on my phone" the new "I don't even own a tv"?
I know myself well enough to know that it'd be a bad idea for me. I have enough issues with screen time as it is, I don't need it in my pocket as well.
>I enjoy the product, but wish they'd spend more time making the core elements of the product work.
I think it might be worth stepping back and thinking about what the core elements of the product are. Not just with X, but with all social media.
Personally I've migrated from thinking 'damn, this is broken' to 'oh, that's a feature, not a bug'. I'm embarrassed how long this really took me across a lot of platforms, but when I stopped thinking about my needs as driving product design decisions it all was less frustrating (albeit more disturbing).
For context in response to the questions about "Why build on X?"
I've been working to relearn math, and there happens to be a large group of others doing the same with Math Academy and sharing daily updates on X.
I found this inspiring (especially as the lessons got harder), so I started tweeting my updates too. But I also wanted a way to independently track my progress across math and other areas to see progress over time, even if I changed tools or stopped tweeting.
So that's the reason I built the app on X: my tweets get logged in a GitHub-like habit graph to show progress over time. It just pulls my bio/profile from X (login with X) and tracks my habit tweets. It's super simple, but meets my needs perfectly. My habit page: https://xtreeks.com/gabemays
I understand the questions around the long-term stability of the API, but I'm optimistic.
Why did you choose to build it on top of X? X is no longer Twitter, it's Elon Musk's vehicle for his personal political agenda. Setting aside whether you are aligned with Musk's political views, it seems like a poor decision to align yourself to a platform that is so explicitly not intended to be used the way you're using it. You should take this as a sign to fix your dependency on X. Musk doesn't care about what you're doing, he cares about his political agenda, he will happily destroy what you're doing if it were to benefit him.
Good luck! You should check out Math Academy, it's more effective/efficient/cheaper but also a good supplement since it's accredited.
I recently turned 40 myself and I'm working through their Foundations courses (made to help adults catch up) before tackling the Machine Learning and other uni courses.
I'll tell you my experience as someone who's been using Math Academy for past 6 months.
Math Academy does what every good application or service does: make things convenient. That's it. No juggling heavy books or multiple tabs of PDFs. Each problem comes with a detailed solution, so getting one wrong doesn't mean hunting around the internet for a hint about your mistake (in the pre-ChatGPT era, of course, not getting something correct meant writing up MathJax on Stack Exchange).
> better than just prompting ChatGPT/Claude/etc
The convenience means you are doing the most important part of learning maths with the most ease: problem solving and practice. That is something an LLM will not be able to help you with. For me, solving problems is pretty much the only way to mostly wrap my head around a topic.
I say mostly because LLMs are amazing at complementing Math Academy. Any time I hit a conceptual snag, I run off to ChatGPT to get more clarity. And it works great.
So in my opinion, Math Academy alone is pretty good. Even great for school level maths I'd say. Coupled with ChatGPT the package becomes a pretty solid teaching medium.
Yes, much better. ChatGPT/Claude/etc. are useful the times I want extra explanation to help connect the dots, but Math Academy incorporates spaced repetition, interleaving, etc. the way a dedicated tutor would, but in a better structured environment/UI.
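Math Academy hasn't published its exact scheduler, but the spaced-repetition idea itself is easy to sketch: a Leitner-style system where a correct answer moves a topic to a longer review interval and a miss resets it. The interval values here are made up for illustration:

```python
from datetime import date, timedelta

# Illustrative review intervals in days (the real system's numbers are unknown)
INTERVALS = [1, 3, 7, 14, 30, 90]

def next_review(box: int, correct: bool, today: date):
    """Leitner-style update: move up a box on success, back to box 0 on a miss.
    Returns (new_box, date the topic is next due for review)."""
    box = min(box + 1, len(INTERVALS) - 1) if correct else 0
    return box, today + timedelta(days=INTERVALS[box])

box, due = next_review(0, True, date(2025, 1, 1))    # box 1, due Jan 4
box, due = next_review(box, False, date(2025, 1, 4)) # miss: box 0, due Jan 5
```

Interleaving then falls out naturally: on any given day you work whichever topics are due, so reviews of old material get mixed in with new lessons.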
Their marketing website leaves a lot to be desired (a perk since they are all math nerds focused on the product), but here are two references on their site that explain their approach:
They also did a really good interview last week that goes in depth about their process with Dr. Alex Smith (Director of Curriculum) and Justin Skycak (Director of Analytics) from Math Academy: https://chalkandtalkpodcast.podbean.com/e/math-academy-optim...
I used an early e-learning platform not because I wanted to but because I was one of its developers. I didn't develop the course-content just the technical implementation.
What I didn't like about the content is that I often had questions about it, but there was no one to ask. Whoever wrote that material was no longer around. It's a frustrating feeling when you can't really trust that what you're studying is factually correct, or whether it is misleading.
I assume AI will be a huge improvement in this respect.
The second link really impressed me, I'm tentatively sold on (and excited for) their approach. Does anyone know of any other accredited programs similar to Math Academy, but for other subjects?
Anything in the soft sciences, or biology/organic chemistry, or comp sci. I know there are a lot of courses for the latter especially, but I'm looking for accredited ones.
Not OP, but I have found MathAcademy to be infinitely better. I really liked the assessment portion, which levels you and gives you an idea of where you are at present. As someone who graduated with an engineering degree a while ago, there were things I realized I didn't know as well as I thought I did and probably would not have prompted an LLM to review.
Math is something that should be taught in an opinionated way with an eye toward pedagogy. Self study with GPT is an excellent tool in math, but only for those who have enough perspective to know which directions to set out on. I don’t think anybody who doesn’t know linear algebra should be guiding their studies themselves.
Given my ChatGPT and friends experience has been one of overwhelming frustration due to incorrect information, I would say Math Academy is in an entirely different galaxy. ChatGPT is great if you want to learn that pi is equal to 4.
b-b-b-but the next supercalifragilistic ChatGPT version will be able to tell you that pi is between 3.1 and 3.2. that will be a Quantum improvement, asymptotically close to AGI.
at least, i think i heard alt samman say so.
you plebs and proles better shell out the $50 a month, increasing by $10 per day, to keep dis honest billionaires able to keep on buying deir multi-million dollar yachts and personal jets.
be grateful for the valuable crumbs we toss to you, serfs.
Not yet, but now I will, just out of curiosity.
There's a problem with mathematicians teaching the subject. After all, the YouTube lectures were also given by mathematicians. In an attempt to make things "accessible", they de-emphasize the algebraic part of the subject and replace it with... I don't know what. The common theme is to consider only R^n. That's not what it's about.
Maybe Math Academy course is different though.
That's not a "mathematician" thing, it's a US thing. US universities, for some reason, insist on teaching mathematics twice, once with lots of handwaving and then at some point you get to do a "proof-based course".
In Europe (at least in certain countries, can't speak to all of them), maths lectures will typically be abstract and proof-based from day 1 - at least for maths majors (but frequently for CS and physics students too). Other majors, such as economics and maybe engineering, may get their own lectures that tend to be more hand-wavey because they don't necessarily need the axioms of real numbers to take a derivative here and there.
My linear algebra course was algebra and proof based to the extent that maybe a little bit more geometric intuition would have helped.