The Ethereum protocol was designed with PoS in mind and has a built-in difficulty bomb[1] to prove it. In short, this difficulty bomb makes it exponentially harder to mine ETH over time. The goal of this feature was to encourage all participants of the ecosystem to transition to PoS as quickly as possible.
That said, the implementation has not worked out entirely as expected, as the difficulty bomb has been pushed back several times over the years. However, to answer your question, the reason they did not move faster is that this transition is hard and treads into some uncharted territory.
> Note that in the future, it is likely that Ethereum will switch to a proof-of-stake model for security, reducing the issuance requirement to somewhere between zero and 0.05X per year.
It absolutely was done before -- Peercoin (PPC) did it, and even DPoS (BTS) predates Ethereum. What are they worth today? PoS is a scam -- and Diem is even worse -- it is dystopia.
I hear this argument a lot and have a genuine question.
How does Bitcoin electricity usage compare to something that achieves a similar goal?
A good example is gold—many people compare Bitcoin to gold. What are the relative electricity costs of the two, and does that justify the cost of either asset? This would take into account the electricity costs of mining, labor, supply chain, storage, etc.
It's easy to quantify with Bitcoin, less so with gold. Transportation, security -- all those things make gold an inefficient store of value. At least with cryptocurrencies, many are being actively worked on that contribute far less to energy expenditure.
I think we should consider the possibility that Bitcoin greatly incentivizes saving and disincentivizes consumption, at least while prices are increasing as quickly as they are.
If people are unemployed or underemployed and you save regardless, that waste of productive work doesn't magically reappear in the future. It's just gone.
Visa operates out of just two datacenters. They are famously secretive about their operations so I don't know how much power they use. The most I've found is that they have 'multi-megawatt' datacenters, which means they probably use more than three megawatts and less than 100.
Bitcoin uses about 8.5 gigawatts. Clearly it uses at least a few orders of magnitude more electricity than Visa. It can process four transactions per second compared to about 25,000 for Visa, so the energy per transaction is even worse by a few more orders of magnitude.
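A quick back-of-envelope sketch of those orders of magnitude, using the figures above. The 30 MW figure for Visa is an assumption picked from within the "more than three megawatts and less than 100" range, not a real number:

```python
# Back-of-envelope energy per transaction, using the figures above.
# VISA_POWER_W = 30 MW is an ASSUMPTION within the 3-100 MW range;
# the Bitcoin figures (8.5 GW, 4 tx/s) come from the comment itself.

BTC_POWER_W = 8.5e9       # 8.5 gigawatts
BTC_TPS = 4               # transactions per second
VISA_POWER_W = 30e6       # assumed 30 megawatts across both datacenters
VISA_TPS = 25_000

btc_j_per_tx = BTC_POWER_W / BTC_TPS      # joules per transaction
visa_j_per_tx = VISA_POWER_W / VISA_TPS

btc_kwh_per_tx = btc_j_per_tx / 3.6e6     # 1 kWh = 3.6 MJ
print(f"Bitcoin: ~{btc_kwh_per_tx:,.0f} kWh per transaction")
print(f"Visa:    ~{visa_j_per_tx:,.0f} J per transaction")
print(f"Ratio:   ~{btc_j_per_tx / visa_j_per_tx:,.0f}x")
```

Even if the Visa assumption is off by an order of magnitude in either direction, the per-transaction gap stays in the hundreds of thousands to millions.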
The inefficiency of Bitcoin is truly mind-boggling. If I were trying to write a hit piece, I wouldn't have the guts to make up numbers as bad as the reality.
What about all the cascading costs to run all non-crypto financial systems? Not just data centers, but everything related like infrastructure, real estate, credit card readers, point-of-sale systems, office furniture, salaries, etc... The list goes on and on. If crypto were used globally, many of these costs and much of this energy waste would disappear.
Also, some currencies like Ethereum are moving to proof of stake, which requires less energy.
> cryptocurrency has a finite volume of transferrable currency
This is true of Bitcoin. Others, such as Ethereum, have a known inflation schedule, which would help alleviate the issue you mentioned with it being a currency of the future. Furthermore, it is looking like the "currency of the future" may (at least in the short term) look more like a cryptocurrency that is worth $1 and is backed 1:1 by a dollar in a bank account. These are known as stablecoins and are a hot topic right now for central banks around the world.
Bitcoin has the property of a known, fixed supply. This allows it to serve as a store of value. It can be and is used for day-to-day transactions, but there may need to be more advancements (such as the lightning network) to make these types of transactions viable long-term.
Ethereum doesn't have a known inflation schedule. It has an arbitrary inflation schedule that is decided by the Ethereum developers.
The idea of cryptocurrency is to replace fiat currencies, with an engineered system instead of a political system. There's no benefit in a cryptocurrency that's backed by a fiat currency. It's like building a stone house on a swamp.
Couldn’t a government just destroy a stable coin by seizing the currency backing it? Isn’t the point of a cryptocurrency supposed to be that it’s not controlled by any government?
I don’t know if a government today could just seize crypto, but the USA did so with gold in the past. Imagine owning a whole bunch of gold, and then waking up one morning & finding out it’d been replaced with US Dollars.
A stable coin doesn’t necessarily have to be backed by fiat, DAI is backed by crypto assets and those assets can fluctuate and sometimes need to be increased during volatility.
I think the point of fiat-backed stablecoins is more about getting around regulations (e.g. Tether) and providing a fiat on-ramp into the ecosystem. Certainly, if the government seized a large portion of Tether's assets, that could affect other assets, but this has actually already happened with Tether (a small portion, not a large one) and nothing catastrophic happened.
> Telegram supports end-to-end encryption ("secret chats") with no logging -- as far as I know there is no proof that these chats are untrustworthy.
The argument I've heard is that Telegram uses their own encryption protocol. The rule of thumb in cryptography is "don't roll your own crypto".
The reason why that statement exists is because there are _countless_ examples of teams coming up with their own, new cryptographic mechanisms that either break (intentionally or not) or were written with a backdoor. People get incredibly clever when it comes to breaking encryption.
AFAIK the only way to be on the right side of this argument is to use a time-tested encryption protocol. However, there are even instances where protocols have been live and in production for x years before anyone discovered a backdoor that had been in the code since day one.
> The rule of thumb in cryptography is "don't roll your own crypto".
This phrase is tiring to hear in this form, and your understanding seems to be incomplete here. Signal also rolled its own crypto, but you don't see anyone saying it's insecure for that reason. That phrase is used to tell non-cryptographers not to roll their own crypto because of the high chance of vulnerabilities being introduced. In the case of Telegram, the company defends its protocol by saying that it was created by people with PhDs in mathematics (which is related to and foundational for, but different from, cryptography). Telegram's encryption protocol (the second version) has not been broken by anyone to date.
>In the case of Telegram, the company defends its protocol saying that it’s been created by people with PhD in mathematics (which is related to and is foundational for, but different from, cryptography).
It was created by Nikolai Durov, who has a PhD in geometry. That's like a gynaecologist performing brain surgery. Specialization matters. Sure, both took a human anatomy 101 class in college, but somewhere along the way they went and spent their ENTIRE careers doing different things. It's easier to get another degree in medical science, sure, but in this case the gynaecologist did not; they just started cutting into the brain with a kitchen knife, and just because their patients haven't died yet doesn't mean they have the credentials to abandon best practices.
MTProto 1 was the problematic protocol that continues to haunt Telegram despite its deprecation in favor of MTProto 2, which is built on standard crypto primitives.
This is a fun phrase that, to me as a non-crypto person, seems reasonable, but I always wonder if there's something of a confirmation bias.
> The reason why that statement exists is because there are _countless_ examples of teams coming up with their own, new cryptographic mechanisms that either break...
But aren't there _countless_ examples of this in crypto made by cryptographers?
I'm not playing devil's advocate, I don't really have a stake here. :)
Not a crypto expert either, but from what I've gleaned listening to e.g. Peter Gutmann describe evaluating new crypto mechanisms, you'll see that:
1. Actual cryptographers usually design with a set of constraints that make their crypto work: those might be about compute power, or memory bandwidth, or what have you, that make an algorithm difficult to brute force.
2. The algorithm will typically be peer-reviewed to try to weed out mistakes, either fundamental mathematical ones, or in the assumptions.
3. The implementation then needs to be high quality.
There is certainly no shortage of examples where systems that pass 1 & 2 are undermined by failures in 3. And all algorithms are susceptible to the context around 1 changing (changes in compute power or whatever).
When you go it alone, you're assuming that you won't make any mistakes in any of these. That seems a pretty tall order.
What really sets cryptography apart is that for a non-expert, there is no way to tell whether it's correct or not. Most bad software has bugs that can be found by users. A bad ML model will do poorly in validation.
But a bad crypto implementation will work. For all intents and purposes, it will appear completely fine. Users will get their messages. The bitstream will appear completely random. At least, until somebody with expertise in breaking crypto systems digs into it.
>But a bad crypto implementation will work. For all intents and purposes, it will appear completely fine. Users will get their messages. The bitstream will appear completely random. At least, until somebody with expertise in breaking crypto systems digs into it.
And that applies even if they're using AES but with the wrong mode of operation. That applies even if they're using best practices like AES-GCM but the CPU doesn't support AES-NI and a cache-timing attack allows key exfiltration.
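To make the "wrong mode of operation" point concrete, here's a toy stand-in (a repeating-key XOR used ECB-style -- NOT real AES, just an illustration of the same structural flaw): it round-trips perfectly, so users get their messages and everything "appears fine", yet identical plaintext blocks produce identical ciphertext blocks:

```python
# Toy illustration of a "working" but broken cipher. This repeating-key
# XOR applied block-by-block is NOT a real cipher -- it stands in for a
# deterministic mode like AES-ECB to show how broken crypto can still
# decrypt correctly while leaking plaintext structure.

def xor_ecb_encrypt(key: bytes, data: bytes) -> bytes:
    # Each byte is XORed with the key, repeating every len(key) bytes,
    # so every 16-byte block is transformed identically.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"0123456789abcdef"                # 16-byte key
msg = b"ATTACK AT DAWN!!" * 2           # two identical 16-byte blocks

ct = xor_ecb_encrypt(key, msg)

# Decryption works: the round trip is perfect, so the system "works".
assert xor_ecb_encrypt(key, ct) == msg

# But identical plaintext blocks yield identical ciphertext blocks --
# an eavesdropper learns about repetition without ever having the key.
print(ct[:16] == ct[16:32])
```

That second check is exactly why the famous ECB-encrypted penguin image is still recognizable: the cipher "works", but the structure leaks.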
Like Swiftonsecurity wrote:
"Cryptography is nightmare magic math that cares what kind of pen you use. Should math care what kind of pen you use to implement it? No, but Fuck You, this is Cryptography."
The attacks are incredibly subtle even for the best systems, and Telegram is so far from even adequate that it's difficult to emphasize enough, so I'll try with my best restraint:
TELEGRAM FUCKING LEAKS EVERY GROUP MESSAGE TO THE SERVER WHICH IS THE EXACT EQUIVALENT OF A FUCKING BACK DOOR.
Group messages on Telegram and normal messages are explicitly not encrypted in order to allow multi-device operation. That is explicit. I don't see how that has anything to do with the security of MTProto.
Also notable is that it can't be fixed or patched the way you'd expect for any other software -- once it's found broken, everything that ever used it is now broken unless it's re-encrypted. There's no migration path to the fixed version.
Assuming that money is what they're after. Have you read Durov's channel on Telegram? Also, having invented the Russian Facebook and been forced to sell it to the Kremlin, I don't think he needs any more money. He's playing a totally different game. I don't know which one, though.
The thing about crypto is that it works in pieces. I'm sure it takes 12+ libraries to make end-to-end encryption work. Signal has this same problem.
This seems like a great way for newcomers to learn the basics of `git` through GitHub. A great move by GitHub to try to capture this market of users who may not yet be comfortable with version control.
Yeah, if a student is taught Github as part of their curriculum they're more likely to stick with it.
I can't really knock it, but at the same time, this has been Microsoft's strategy for a long time: students learn Office in school but not competitors' products, so they take it with them into their private and professional lives.
It's a difficult one though, given that both github and office are pretty standardized / ubiquitous everywhere.
Yes, and this was a big part of the success. If someone else had built it, they might not have used it, which would have made adoption much rougher.
Aha, I wouldn’t worry too much about it. 20 years ago CVS was the new hotness in my RCS using lab. 10 years ago it was SVN. We’re due for a seismic shift soon.
RegExr [0] does a great job of showing individual highlights even when they are in a sequential string. You could try to implement this if you want, instead of showing a callout with a note letting the reader know that the highlights should be on individual characters.
One of my favorite things about htop is the set of projects that have been created in its image but that focus on information other than system resources.
Cointop [0] is one of these projects that comes to mind.
intel_gpu_top helped me solve a mysterious performance issue on a MacBook after countless hours of fruitless investigation. Overheating and throttling was an issue but even after I fixed it the system would lag hard - instantly when I used the external 4k display, and after a while on the internal 1440p screen. Turns out cool-retro-term was maxing out the integrated Intel GPU which caused the entire system to stutter and lag.
Unfortunately both the MBP and my current XPS 15 are unable to drive cool-retro-term on a 4k display with the CPU integrated graphics, and they both overheat and throttle if I use the nvidia graphics card :/
Laptops have very poor cooling. I have a Clevo laptop with a great processor but it will sometimes throttle itself to cool down. Great for small bursts of activity such as compilation but I don't understand how they could market these laptops as gaming machines. Running ffmpeg stabilizes the temperature at a healthy 96 degrees.
Modern powerful hardware has a hard time emulating a glass of water with good fidelity. Reproducing physical effects like ghosting is often harder than it looks.
There's a lot of different usages that may not heat the GPU as much. Also Windows might have better thermal management in the drivers.
CPU-wise, Intel defines their TDP as the average heat dissipation, but the CPU can boost higher than this. From what I understand, though, they tell manufacturers to design to the TDP.
To me a browser for this seems like overkill, but I can understand the argument that "everyone already has a browser open", even if I don't think that it leads to good places.
Honestly, it seems like one of the worst parts: the dates are limited by UNIX timestamps and aren't flexible enough to store dates from centuries ago.
Apparently it's technically possible in Git as of a year or two ago, with some conversion bugs probably still lurking, but Github/Gitlab still don't support it. And the frontend tools like "git commit" don't support it. The project in this answer is using git hash-object to create the commit from raw bytes. https://stackoverflow.com/questions/21787872/is-it-possible-...
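For context, git stores commit times as integer seconds since the UNIX epoch (1970-01-01 UTC), so a date centuries ago comes out as a large negative number. Here's a quick sketch; the 1503 date and the author string are just arbitrary examples:

```python
# Git stores commit times as integer seconds since the UNIX epoch
# (1970-01-01 UTC), so dates from centuries ago come out negative.
# The 1503 date and author string below are arbitrary examples.
from datetime import datetime, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
old_date = datetime(1503, 6, 1, tzinfo=timezone.utc)

seconds = int((old_date - epoch).total_seconds())
print(seconds)  # a large negative number

# A raw commit header would carry the timestamp like this (format only --
# many frontends and hosting services choke on negative values):
print(f"author Example Author <author@example.com> {seconds} +0000")
```

This is why the project in that answer has to drop down to `git hash-object` and build the commit object from raw bytes: the porcelain commands reject timestamps the object format can technically hold.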
As a non-designer, I am finding it easier and easier to get something professional-looking up. Couple this with Material-UI and a solid-looking template and you could have a beautiful, custom landing page up in a day.
There are obviously UX and other issues that I struggle to excel at, but from a simple, consumer-facing perspective, I see UI getting more and more manageable.
> As a non-designer, I am finding it easier and easier to get something professional looking up
The reason you find it getting easier is not to be found in any of these tools, but in your repeated effort to "put something up" and getting better at it.
That's true, but guidance on how to get better at design is much more available than it used to be. I remember creating web sites 20 years ago and getting feedback that they looked dreadful. I had no idea why people didn't like them; I liked them! It turns out I like lots of things that most people don't and I needed guidance to find out what most people like.
I struggle hard with the UX, but I'm amazed that with modern tools I can create a rudimentary, but clean design in Sketch, code it, and manage the infrastructure behind it.
Even 10 years ago, these were 3 to 5 completely distinct skillsets (designer, frontend, backend, infrastructure, DBA).
The divide between the roles you mentioned has gotten bigger over time, in my opinion. It's maybe easier nowadays to put together an okay-looking proof-of-concept app, but when you get into large apps and high traffic, the stack has gotten more complicated to do right. You don't just roll out a server-rendered MVC app; you need a front-end with 3 layouts for different devices and state management more complex than on the back-end. Or maybe that's just my feeling.
I'm an API guy who uses Python and Hy, and I'm getting started with some front-end programming on my weekends to give some hobby projects a public face. It annoys me that every website template now expects you to install via npm. Bulma is all-in on that.
If you dig down, it's apparently because the Sass DSL is used to generate the CSS. But there are Sass compilers in C and in Python, and probably in Prolog at this juncture.
We looked into both when writing an open-source admin interface for NestJS. We went for bootstrap because:
- Nobody likes bootstrap, but everybody knows bootstrap (good for an open-source lib)
- Better browser compatibility
- Better accessibility
- Plenty of UI widgets built on top of bootstrap (we don't use them, but people using the lib could)
- About as big as each other
[1] https://medium.com/fullstacked/the-ice-age-is-coming-ee5ad5f...