We are currently working on the transition to Proof-of-Stake with an upgrade that implements Albatross to speed up transaction times and improve the overall scalability of the network. Moving to PoS is also about overcoming the environmental burden caused by the electricity consumption of PoW. In this episode, Nimiq developers discuss the current status and main challenges of this upgrade.
Hello and welcome to Easy Crypto with Nimiq. Nimiq makes crypto easy to use for everybody, and we have this series to explain more about it. I am your host, Richy, and this time we are going to be talking about Nimiq 2.0 and the next steps. Here I am with Philipp, a technical lead of the team, and also with Sebastian Dietel, also known as Basti, a blockchain developer of the team. So guys, welcome and thank you for joining me.
Thank you for having us
Thanks. So, what is Nimiq 2.0, for people that are just getting to know about this?
You want to go ahead?
Sure. Nimiq 2.0 is the new version of our blockchain protocol, and the primary thing it introduces is a switch from a proof-of-work protocol to a proof-of-stake protocol, so we're introducing staking. That has a bunch of advantages. First of all, proof of stake is much more energy efficient than proof of work, so the blockchain protocol is going to use a lot less energy and be much greener and more environmentally friendly. That's one of the big advantages of switching to proof of stake. But proof of stake is not only more energy efficient, it is also a lot more efficient overall, so we can get better throughput, better TPS, better performance of the chain by moving from a proof-of-work to a proof-of-stake model. And staking is going to allow users of the chain to use their coins to secure the network: stake them with a validator, who can then use that stake to vote on blocks, secure the network, and process transactions. Stakers are going to be able to earn rewards by participating in this staking mechanism. Those, I think, are the primary advantages and the primary new features that we're going to be introducing with the 2.0 chain.
Yeah, to add to that just a little bit: one of the interesting parts about being able to stake with a validator is that formerly, in proof of work, you actually had to run a miner yourself to earn these rewards, which obviously means you need the hardware for it and you need to maintain all of that. Whereas in proof of stake you no longer have to; you can basically just delegate your stake to a validator, who then in turn does all this work for you. So that really makes earning rewards and producing blocks accessible to pretty much everybody, and that is also a very nice thing to have in this new protocol.
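To illustrate the delegation model described above, here is a minimal sketch of how a validator might split a block reward among its delegators proportionally to their stake, after taking a flat fee. The fee percentage, the function name, and the formula are hypothetical illustrations, not Nimiq's actual reward mechanics.

```python
# Hedged sketch of proportional reward sharing under delegated staking.
# The 5% validator fee and integer payout math are assumed for illustration.

def split_reward(total_reward: int, stakes: dict, validator_fee: float = 0.05) -> dict:
    """Split a block reward among delegators proportionally to stake,
    after the validator takes a flat percentage fee (hypothetical)."""
    fee = int(total_reward * validator_fee)
    distributable = total_reward - fee
    total_stake = sum(stakes.values())
    payouts = {
        staker: distributable * stake // total_stake
        for staker, stake in stakes.items()
    }
    payouts["validator_fee"] = fee
    return payouts

# A delegator with 60% of the stake earns 60% of the post-fee reward.
payouts = split_reward(1_000_000, {"alice": 600, "bob": 400})
```

The point of the sketch is simply that delegators earn in proportion to what they stake, without running any infrastructure themselves.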
And in this case I think it's also important to note that because of the unique properties that the Nimiq blockchain has, namely browser nodes, for example, and super fast transactions, it is not a copy-paste of another blockchain, right? Everything is made from scratch, except that we reuse or utilize some libraries from the common blockchain space, like libp2p. So we are building our own stuff here.
Yeah, this is definitely an original blockchain, not a fork or anything like that. We are building this basically from the ground up, obviously using some of the best practices so we don't reinvent the wheel everywhere, but the protocol itself is an original protocol designed by ourselves.
Okay, and where are we at right now? How would you explain the progress towards 2.0, where are we at?
Maybe, to motivate the previous point just a little bit more: I think one of the reasons why we are actually pursuing our own implementation and not just reusing an existing one is that one of the key objectives we have is to allow wallet apps, clients, and users to directly interact with the network. So we want to have a protocol that you can access without having an intermediary that grants you access to the underlying network. With many blockchains that have a distributed and decentralized design underneath, it is still very difficult for client apps to actually connect to the network directly and become a first-class node of that network, and that is one of the motivations why we are doing our own protocol: to enable this particular aspect.
Something like: maybe most, or maybe all, cryptocurrencies are decentralized all the way to the server node; we are decentralized all the way to the wallet.
Yes, I think that's a pretty big distinguishing aspect that sets our protocol apart from others. Of course, others also care about light clients and wallets, but I think we are really taking it all the way here, also with the browser nodes, which are another example of this particular direction that we are pursuing.
Okay, that being said, where are we at right now? What are the main challenges that we are working on with Nimiq 2.0?
Okay, maybe let me throw some in there and then Basti can add more. So, we have a full implementation of the protocol, but we are still making it more robust; that's basically where we are at. Everything is implemented, the chain runs, it works, it can process transactions, our validators work, and the network is also reasonably fast, orders of magnitude faster than our original implementation. The main challenge we face right now is making it so robust and so stable that, no matter how faulty or malicious certain network participants behave, the network and the protocol are always able to continue producing a chain, and can recover from pretty much any state and any behavior that is covered by our security assumptions. In contrast to proof of work, where every miner is basically his own participant that can produce blocks and process transactions on his own, without having to interact with other parties on the network except to get the transactions, in our proof-of-stake model we have a lot more collaboration between the key entities that secure the network and process transactions, namely the validators. Therefore it is much harder to get this right, to a point where, when validators come and go, or behave faultily or maliciously, we can still continue with our protocol and keep producing blocks, even though this is a very collaborative yet distributed process. So it's a whole different story than coming from the proof-of-work world.
Yeah, definitely. So currently, I would say, the ongoing process is to have our implementation run and perform, then find out when it breaks and why it breaks, then fix it and make it as stable as you can absolutely make it, so it withstands pretty much any scenario being thrown at it. That usually involves digging through lots and lots of gigantic logs to try and figure out what is actually happening. And then, obviously, that also comes with the idea of improving tooling and these kinds of things, to enable us to find these issues more quickly and ultimately to resolve them more quickly. There is a lot of work being done in that regard to enable us to handle these things quicker than before, and I think we are actually on a pretty good way to achieving that. I'm fairly confident.
And what are the biggest challenges so far? Like, what things kept you up at night or gave you the biggest headache?
Well, I think there are multiple challenges. Generally, one area that we are looking at is, of course, chain performance. Coming from a chain that can do roughly 12 transactions per second with a block time of one minute, things are nice and easy and slow; you have a bunch of time for everything, you're not in a rush. A block a minute, that's easy and chill. But now we are talking more about 10 blocks per second, depending on what the load of the network is, and that definitely introduces new challenges: we have to make sure that nodes can actually keep up with that extremely fast-moving chain, and that even if they disconnect for a while and miss some of what was happening on the chain, they can properly catch up to the state again. Another challenge that follows from this: if we are so fast, then our chain state can also grow pretty quickly. If we can process thousands of transactions per second, and we actually do that, then our database of everything that has happened grows very quickly, much more quickly than it used to, and that also comes with additional challenges and issues that we need to address and solve. That's basically one area, but I think we are already in a pretty good state in that regard, so performance is not top of the list right now.
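A rough back-of-envelope calculation shows why state growth becomes a concern at these speeds. The bytes-per-transaction figure below is an assumed illustrative value, not a Nimiq protocol constant, and the function is purely a sketch.

```python
# Back-of-envelope sketch of chain-history growth at high throughput.
# The 200 bytes-per-transaction default is a hypothetical assumption.

def daily_growth_bytes(tps: float, bytes_per_tx: int = 200) -> int:
    """Approximate on-disk history growth per day for a given
    sustained transactions-per-second rate."""
    seconds_per_day = 86_400
    return int(tps * seconds_per_day * bytes_per_tx)

# ~0.2 tps (roughly a 12-tx block per minute) vs. 1000 tps sustained:
slow = daily_growth_bytes(0.2)   # on the order of a few MB per day
fast = daily_growth_bytes(1000)  # on the order of tens of GB per day
```

Even with conservative assumptions, sustained high throughput shifts storage growth from megabytes to tens of gigabytes per day, which is why pruning and efficient state handling matter.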
I think another challenge, something I already hinted at earlier, is the fact that block production, or transaction processing, has now moved from being a process that can be run by one miner individually, on his machine, in his node, to being a distributed process that involves communication with a lot of other entities. It's basically a distributed computation, and reasoning about this distributed computation, when it is happening across, I don't know, 20 or more nodes, and a lot of computations happen in a very short time, makes analyzing what is actually happening under the hood more challenging. Also because there is so much data to sift through, which we need to post-process in a way that lets us understand and reason about the block production process: why some things get stuck, or don't work as quickly as we want them to, or behave the way they do under certain conditions. So that is definitely another challenge that we are looking at here: the distributed nature of the system, combined with the extremely high throughput and the large amount of data that every run produces. That's a lot different from a block a minute, with everybody doing it on their own.
Yeah, the collaborative process to produce blocks also introduces a wide variety of network messages that previously didn't exist, which now need to be delivered in time to specific nodes, and that creates some synchronization issues between those nodes. In particular, when thinking about 10 blocks a second, the time per block is actually fairly short, and network delay adds to that; these messages also need to be delivered in time, and that is definitely a bit of a challenge. So yeah, it does change the landscape of how this protocol works quite drastically, I would say, and that obviously introduces some things to be taken care of.
And we were talking a little bit about browser nodes, decentralization all the way to the wallet, and lowering the barrier for people to connect directly. What are the differentiators in terms of the design of our chain? For example, having no smart contracts to keep things simpler, and also being decentralized all the way. Do those things change the design of the chain itself?
So generally, we need to strike the right balance between performance and decentralization, because ultimately, at least in a monolithic blockchain design where every node validates everything, the more throughput you have, the more transactions you're processing per time unit, the higher the load on each individual node. For example, we can see with Solana that they process a tremendously high number of transactions per second, but at the same time they require very strong machines, very high computational resources, in order to actually participate in the network: special hardware, insane amounts of RAM, just to be able to run such a validator. That naturally leads to more centralization, simply because the barrier to entry to become a validator is much higher; you need to invest a lot more into hardware, you cannot just run this thing off your home computer, pretty much. So that's one area where we are trying to strike the right balance between having reasonable throughput and still having hardware requirements that are low enough to allow ordinary users to run nodes on the network. Another aspect that we need to take into account is that if we want to have light clients that can directly access the network, we need to be able to provide them with a very succinct proof of what the current state of the network is, such that they can validate the state of the network, their own balance, and transactions that have happened, themselves, without having to rely on a gateway or another intermediary to help them with that.
And also without knowing the entire chain because they don't
Exactly, without knowing the entire chain or downloading the entire chain, because that's obviously infeasible for a light client. That also comes with some implications for the design: we need mechanisms to succinctly prove the chain state. The fact that we don't have smart contracts was more of a conscious design decision that we made very early on in the project, because our focus was on doing payments, and doing payments in a decentralized way as easily as possible. This "as easy as possible" approach, an approach that values simplicity, did not really fit well with the smart-contract direction of a general-purpose blockchain. So that's why there are no general-purpose smart contracts supported on the Nimiq blockchain at this time. However, we do have smart contracts on the chain; they are just hard-coded, not general purpose, and cannot be developed by third parties. I think in the future we are probably going to have a lot more of those hard-coded smart contracts, to have more possibilities, more options for things you can actually do on chain. But at this point in time there are no plans to ever turn this into a general-purpose blockchain like Ethereum, with programmable smart contracts, simply because that is not the focus that we have in mind when we want to do easy payments.
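As one illustration of the kind of succinct proof a light client can check without holding the full chain, here is a minimal Merkle inclusion-proof verifier. Nimiq's actual accounts tree and proof format differ; every name and the tree shape here are hypothetical.

```python
import hashlib

# Minimal Merkle inclusion-proof check: a light client holds only a small
# root commitment, and verifies one piece of state (e.g. a balance) against
# it using a short proof, instead of downloading the whole chain.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof: list, root: bytes) -> bool:
    """Walk from a leaf hash to the root, combining with sibling hashes.
    Each proof step is (sibling_hash, side) with side "left" or "right"."""
    node = h(leaf)
    for sibling, side in proof:
        node = h(sibling + node) if side == "left" else h(node + sibling)
    return node == root

# Build a tiny two-leaf tree and verify leaf "a" against its root.
leaf_a, leaf_b = b"a", b"b"
root = h(h(leaf_a) + h(leaf_b))
ok = verify_inclusion(leaf_a, [(h(leaf_b), "right")], root)  # True
```

The proof size grows logarithmically with the tree, which is what makes this kind of verification feasible for a wallet or browser node.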
Yeah, it makes more sense to maybe let them do their thing and then be interoperable in some way; the HTLCs we have right now could maybe enable some connection in the future, but it's not our focus. And in terms of next steps, talking just about 2.0, what are the next challenges? Like, now, after we finish this podcast, you go back to work and you have to face this bug, this thing.
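The HTLCs mentioned above follow a simple rule set: funds locked to a hash can be claimed by revealing the hash's preimage before a timeout, or refunded by the sender afterwards. A minimal sketch of that logic, purely illustrative and not Nimiq's actual on-chain HTLC contract:

```python
import hashlib

# Hedged sketch of HTLC spending rules. Function names and the use of a
# plain float timestamp are hypothetical simplifications.

def can_claim(preimage: bytes, hashlock: bytes, now: float, timeout: float) -> bool:
    """The recipient claims by revealing the preimage before the timeout."""
    return hashlib.sha256(preimage).digest() == hashlock and now < timeout

def can_refund(now: float, timeout: float) -> bool:
    """The sender can reclaim the funds once the timeout has passed."""
    return now >= timeout

hashlock = hashlib.sha256(b"secret").digest()
claim_ok = can_claim(b"secret", hashlock, now=100.0, timeout=200.0)       # True
wrong_preimage = can_claim(b"guess", hashlock, now=100.0, timeout=200.0)  # False
refund_yet = can_refund(100.0, 200.0)                                     # False
```

Because revealing the preimage on one chain lets the counterparty claim on another, this primitive is what makes cross-chain atomic swaps possible without a trusted intermediary.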
You want a very concrete example for that? Okay. So currently, for some reason, our validator gets stuck producing a certain block type. Not all the time, just rarely, very rarely actually, but once is enough to stall the chain, so it absolutely needs fixing. It has to do with the communication between validators, which at some point can turn into a faulty set of communications, and that is what I'm currently dealing with and trying to fix. It's actually not that easy to find, and it's quite time-intensive to look through the logs and find the specific bits of communication that I'm looking for to identify what actually goes wrong here. But this is just one example out of several, which basically all relate to the stability work, making the chain perform regardless of circumstances. This is the example I'm working on today, but yesterday there was another one, sort of similar to that. So, enough work for now.
Okay, so that's a very good example: basically, validators going away, which then leads to some identifiers of those validators changing, and then we apparently cannot communicate with those validators anymore, because the identity has changed and something goes wrong with the mapping there. Other issues we are looking at are network-related things. For example, after we restart validators, re-establishing network connections to other entities, or other validators in the network, sometimes has issues, which down the line leads to certain requests not being delivered correctly, which can then prevent a node from actually syncing up with the network; it then falls behind and cannot participate in block production anymore. That's another thing we are looking at. But also, as Basti mentioned earlier, we are improving the tooling and the workflows that we have for identifying and resolving these issues. I mentioned earlier that a single run of the DevNet, which has 20 nodes and produces a million blocks, can easily produce gigabytes of data, and in order to efficiently assess the conditions that we see in the DevNet and want to follow up on, we are now setting up better tools to process this data: not having to scroll through log files that are 100 megabytes large, but instead having nice visualizations of these runs, so that we can basically see at a glance what is going on. This is also something that we will later use to actually monitor the live system and to identify problems that might occur there. So, generally, setting up better monitoring and better tooling to gain better insight into what is happening in the network.
And the gigabytes of data are basically gigabytes of log data only. Obviously, a chain with a million blocks produces quite some data on its own already, just in database size, but that is not what Philipp was alluding to here; it's the log data that is actually quite large and very hard to dig through if you're looking for something very specific.
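The kind of log sifting described above usually starts with a streaming filter: scan a huge file line by line and keep only entries matching a pattern, rather than opening a multi-gigabyte file in an editor. A minimal sketch; the log format and field names are hypothetical, not Nimiq's actual output.

```python
import re

# Stream through log lines and extract only the entries that look like
# validator stalls. Streaming keeps memory flat regardless of log size.

PATTERN = re.compile(r"validator_id=(\d+).*?(TIMEOUT|STALLED)")

def find_suspect_lines(lines):
    """Yield (validator_id, event) pairs for lines matching the pattern."""
    for line in lines:
        m = PATTERN.search(line)
        if m:
            yield int(m.group(1)), m.group(2)

# In practice `lines` would be an open file handle; a small list works too.
log = [
    "t=1 validator_id=3 proposed block 42",
    "t=2 validator_id=7 TIMEOUT waiting for macro block",
    "t=3 validator_id=3 STALLED in view change",
]
hits = list(find_suspect_lines(log))  # [(7, "TIMEOUT"), (3, "STALLED")]
```

Because the function is a generator over an iterable, the same code handles a three-line list or a hundred-gigabyte file without loading it into memory.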
Well, guys, really complex but really interesting work as well. I know everybody is super excited about the realization of this work, so keep up the good work, and thanks a lot for joining us.
Thanks for having us
Yeah thanks Richy
Take care, bye and thank you
None of the statements must be viewed as an endorsement or recommendation for Nimiq, any cryptocurrency, or investment product. Neither the information, nor any opinion contained herein, constitutes a solicitation or offer by the creators or participants to buy or sell any securities or other financial instruments, or to provide any investment advice or service. All statements made in Nimiq’s web pages, blogs, social media, press releases, or in any place accessible by the public, and oral statements that may be made by Nimiq or project associates, that are not statements of historical fact, constitute “forward-looking statements”. These forward-looking statements involve known and unknown risks, uncertainties, and other factors that may cause the actual future results, performance, or achievements to be materially different from any future results, performance, or achievements expected, expressed, or implied by such forward-looking statements.