There's a lot of discussion here https://retrocomputing.stackexchange.com/questions/7412/why-... but nothing seems conclusive. I would wager the last answer, "IBM was using 400Hz", to be the most directly causal reason. The motor-generator configuration might provide galvanic isolation and some immunity to spikes and transients as well?
Smaller transformers and capacitors in all the linear power supplies.
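Back-of-the-envelope to make that concrete (my numbers, not anything from a spec sheet): for a full-wave rectifier, the filter capacitance needed for a given ripple scales inversely with line frequency, and transformer core size shrinks similarly since flux per cycle goes as V/f.

    C ≈ I / (2 · f · ΔV)
    60Hz:  C ≈ 1A / (2 · 60 · 1V)  ≈ 8300 µF
    400Hz: C ≈ 1A / (2 · 400 · 1V) ≈ 1250 µF   (~6.7x smaller)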
400Hz is still common in aircraft. Distribution losses are higher, but you're going across the room, not across the country.
There are very survivable fiber cables designed for stage and A/V setups, for instance, and even "real" military-grade ones. But the common thin LSZH stuff is surprisingly resilient in my experience; so long as it didn't kink, the OP would probably have been fine with a "standard" cable. In any case, I would always try to place fiber somewhere it can be re-pulled: conduit, tray, or a plenum.
I bought a big spool of 6-strand Corning stuff a long time ago for various projects. The cost and diameter don't increase much to add some protection lines, and even if you never imagine using them, they can save you a re-pull if you bugger something up in construction.
The contractors were probably dubious of that resiliency given their lack of experience. I recently ran fibre in my house and have to say I was pleasantly surprised that fibre patch cables (unarmoured) can survive a good pull through a duct.
They can survive a 500m pull (in 100m stages, otherwise the friction is too high) in mud and rain, through an active water line.
Honestly, fibre, even unarmoured with just a standard Kevlar & HDPE sleeve is hardy stuff. When I first started mucking around with it a few years ago I was like “don’t breathe on it too hard”, now I’m like “tie the fibre in a knot on the bullbar and pull it with the truck”.
> Honestly, fibre, even unarmoured with just a standard Kevlar & HDPE sleeve is hardy stuff.
To be fair, it also got a lot better in the last 20–30 years. In particular, we now have bend-insensitive fiber for the last mile (G.657.A1/G.657.A2) and in general, we just figured out how to make it more robust.
You can kink the shit out of fibre and it’s fine. Like, I’ve accidentally managed ~15mm diameter loops while pulling and then proceeded to yank on them. The Kevlar takes the brunt. Only time I broke a fibre was when it was me and two other guys pulling on it as hard as possible - and instead of moving it went “ping”.
LSSM is really top notch. It's grassroots, done by volunteers who want to preserve and share experience with these machines.
Most museums, by contrast, are quite sterile. I'll pick on CHM as an example, but it applies to basically any metropolitan museum: you can tell they have a ton of money, but it's the standard impressive-architecture-and-displays setup, designed to ferry large groups through relatively quickly without imparting much wisdom on the participants.
I never got a chance to visit the Living Computer Museum, but I wonder if it hit some level of funding that let it serve the masses while still going deep and hands-on.
That wasn’t my impression of the Computer History Museum in Mountain View. (Assuming that’s the CHM you mention.) I haven’t been in maybe 10 years, though. Have things changed?
I went to both years ago, and enjoyed LCM more. The difference is that LCM was extremely hands-on. They had all kinds of rare machines out on the floor that you could just...play with. Imagine using an original Lisa running XENIX of all things, then firing up MazeWar on an Imlac.
CHM is very well done but more of a traditional museum with limited, curated interactivity.
The huge page article is sequitur with official documentation like https://docs.redhat.com/en/documentation/red_hat_enterprise_.... THP can only issue up to 2MB pages on amd64, so it's not necessarily a silver bullet for large persistent consumers like a DB or a GC'd language, and it's worth knowing about the older methods.
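To make the "older methods" concrete, here's a minimal sketch (mine, not from the article) of explicit hugetlbfs allocation via mmap(MAP_HUGETLB) on Linux. It assumes the pool was reserved beforehand, e.g. echo 64 > /proc/sys/vm/nr_hugepages:

    /* hugemap.c: request 2MB huge pages explicitly instead of relying
       on THP. Fails cleanly if no huge pages have been reserved. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define LEN (16UL * 1024 * 1024)   /* a multiple of the 2MB page size */

    int main(void) {
        void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap(MAP_HUGETLB)"); /* e.g. ENOMEM: pool not reserved */
            return 1;
        }
        memset(p, 0, LEN);              /* touch the pages to fault them in */
        munmap(p, LEN);
        return 0;
    }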
To me they look like marketing posts, but they aren't devoid of effort or meaning as a quick intro to various topics.
Is this performance art where you do the thing you accuse? "Malapropism" is a five-dollar word if "sequitur" is. The use tracks with the Latin and English definitions; what does any of this have to do with "sequential"? I'm implying the article is probably not simple AI slop because it follows the official documentation. Add "a" in front of it if your worth is determined by neckbearding a borrowed verb that can only noun in the least.
Seemed like a pivotal time in IBM's history. The IBM of 1993, at the start of Gerstner's tenure, was staring irrelevance in the face: mainframes, long evergreen, had been declared relics, and it had lost the bus wars in the PC industry. The IBM of 2002 was still an interesting R&D and products company. Unfortunately the talent bleed-off has been continuous since then, and neither the R&D nor the products are as astonishing versus the competition as they used to be. At least no follow-on CEO has been daft enough to undercut the mainframe business so far, but they have missed timing and executed limply on plenty of things. POWER8 was almost perfectly positioned to be the AI interconnect and glue of choice.
Gerstner changed the culture, but there was perhaps no greater or more startlingly visible change than:
"IBM withdrew from the retail desktop PC market entirely, which had become unprofitable due to price pressures in the early 2000s. Three years after Gerstner's 2002 retirement, IBM sold the PC division to Lenovo." [1]
Probably nobody here is an Oracle fan, but the miss in sentiment like this is that you could have written the same comment, minus OpenAI, 10 and maybe even 20 years ago.
Definitely true, but a lot of Oracle sites are that way because of decisions made decades ago. Opportunities to re-architect are rare. But when those opportunities do come along, nobody is choosing Oracle RDBMS for their future state.
What I do see is orgs choosing other Oracle apps like ERP which sneak the Oracle RDBMS in as part of the bundle.
Anyone using Oracle purely as a database is going to migrate to PostgreSQL eventually, but there are a lot of orgs where the database is just one part of a wider Oracle ecosystem with world-class vendor lock-in features.
They have some funny accounting, like Google and Microsoft, where everything is "cloud", but the revenue streams are certainly diversified beyond the straight Oracle DB, such that PostgreSQL equivalence or superiority doesn't affect the viability of the company or the stock price. Communities like this often over-index technical and personal opinion versus reality.
I worked at a midsize company that was core internet infra, where we had an in-house OS, ODM hardware, and FOSS databases. The one Oracle DB and Oracle HW was slipped in the door through finance for ERP, as you say. Though I suspect that would be cloud-hosted these days.
Yup "could shape".. I mean this has been going on time immemorial.
It was odd in my lifetime to see random nerds who hated Bill Gates the software despot morph into "acksually, he does a lot of good philanthropy", but the floodgates are wide open for all kinds of bizarre public behavior from oligarchs these days.
The game is as old as it is evergreen. Hearst, Nobel, and Howard Hughes come to mind of old; Musk with Twitter, Ellison with TikTok, Bezos with the Washington Post these days, etc. The costs are already insignificant because they generally control other people's money to run these things.
Your example is weird tbh. Gates was doing capitalist things that were evil. His philanthropy is good. There is no contradiction here. People can do good and bad things.
The megahertz wars of the 1990s made it really difficult to understand relative performance like this even within the same ISA, and I think computers with the 603 CPU were a bit of a wrench in people's perception of the Mac.
The 180 or 200MHz 603e with 16/16k L1 cache in that Performa 6400 wasn't slow by any stretch, but it probably didn't have an L2 cache. Coupled with the gradual transition of the OS and apps to PPC-native code, these machines were often a little mismatched with expectations and the realities of the code.
Meanwhile, that PowerTower had a 604e with 32/32k L1 and 1MB of L2 cache. That was a fast flier, with a superscalar, out-of-order pipeline more comparable to the Pentium Pro and PII.
I think GCC is the real shining example of GPL success: it broke through a rut of high-cost developer tooling in the 1990s and became the de facto compiler for UNIX and embedded BSPs (Board Support Packages), while training corporations on how to deal with all this.
But then LLVM showed up and demonstrated that it is no longer imperative to have a viral license to sustain corporate OSS. That might not have been possible without the land-clearing GCC accomplished, but times are different now, and corporations have a better understanding of, and relationship with, OSS.
The GPL leaves enough room to opt out of contributing (e.g. services businesses, or just stacking immense complexity into a BSP so as to ensure vendor lock-in) that it isn't a defining concern for most users.
Therefore I don't think Linux's success has much to do with the GPL. It has been effective in the BSP space, but the main parts most people care about and associate with Linux could easily be MIT-licensed with no significant consequence for velocity or participation. In fact, a lot of the DRM code (graphics drivers) is dual-licensed that way.
> But then LLVM showed up and showed it is no longer imperative to have a viral license
I am not sure I remember everything right, but as far as I recall, Apple originally maintained a fork of GCC for its Objective-C language and didn't provide clean patches upstream; instead, it threw its weight behind LLVM the moment it became even remotely viable, so it could avoid the issue entirely.
Also, GCC didn't provide APIs for IDE integration early on, causing significant issues for attempts to implement features like refactoring support on top of it. People had the choice of using LLVM, half-assing it with ctags, or sticking with plain-text search and replace like RMS intended.
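For what it's worth, that's exactly what libclang made easy: a stable C API over the parsed AST. A minimal sketch of the kind of tool it enables (my example; assumes libclang headers/libs are installed, built with something like cc tool.c -lclang), dumping the function declarations in a source file:

    /* tool.c: walk a translation unit's AST with libclang and print
       every function declaration -- the kind of IDE-style consumer
       that was hard to build on top of GCC. */
    #include <clang-c/Index.h>
    #include <stdio.h>

    static enum CXChildVisitResult visit(CXCursor c, CXCursor parent,
                                         CXClientData data) {
        (void)parent; (void)data;
        if (clang_getCursorKind(c) == CXCursor_FunctionDecl) {
            CXString name = clang_getCursorSpelling(c);
            printf("function: %s\n", clang_getCString(name));
            clang_disposeString(name);
        }
        return CXChildVisit_Recurse;   /* keep descending into the AST */
    }

    int main(int argc, char **argv) {
        if (argc < 2) return 1;
        CXIndex idx = clang_createIndex(0, 0);
        CXTranslationUnit tu = clang_parseTranslationUnit(
            idx, argv[1], NULL, 0, NULL, 0, CXTranslationUnit_None);
        if (!tu) { fprintf(stderr, "parse failed\n"); return 1; }
        clang_visitChildren(clang_getTranslationUnitCursor(tu), visit, NULL);
        clang_disposeTranslationUnit(tu);
        clang_disposeIndex(idx);
        return 0;
    }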