...by Daniel Szego
"On a long enough timeline we will all become Satoshi Nakamoto.."
Daniel Szego
Showing posts with label consensus.

Monday, October 15, 2018

Identity is a scarce resource in consortium blockchain


To prevent a naive Sybil attack, most decentralized consensus protocols require a scarce resource in order to take part in the consensus mechanism. In proof of work this resource is computational power; in proof of stake it is a kind of cryptocurrency. Consortium systems do not actually differ very much from this idea. In consortium systems the scarce resource is identity: only nodes with a special distributed identity are able to participate in the consensus. In this sense it works similarly to public blockchain networks: the scarce resource guarantees that no one with a huge bot network, but without the resource, can influence the consensus voting.
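The identity gate can be sketched in a few lines. The allowlist of validator identities below is a made-up example; real consortium platforms manage identities through certificates issued by a membership service, but the principle is the same:

```python
# Sketch of identity-gated consensus participation in a consortium chain.
# The validator identity set is hypothetical example data.

VALIDATOR_IDS = {"org-a-node-1", "org-b-node-1", "org-c-node-1"}

def may_participate(node_id: str) -> bool:
    """Only nodes holding a registered identity can vote on blocks."""
    return node_id in VALIDATOR_IDS

# A Sybil attacker can spin up arbitrarily many nodes, but none of them
# holds a registered identity, so none of them can influence the vote:
sybil_nodes = [f"attacker-{i}" for i in range(1000)]
assert not any(may_participate(n) for n in sybil_nodes)
assert may_participate("org-a-node-1")
```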

Friday, September 28, 2018

Nakamoto consensus, quorum and parallel processing


There is a fundamental difference between the two major families of consensus algorithms: Nakamoto consensus and quorum-based consensus. Since in Nakamoto consensus there is always one node that "wins" and creates the next block, the whole process is pretty much sequential. The winning node acts as a "dictator" for the block it creates, implying quasi-sequential processing. With that structure a lot of services and algorithms are not easy to realize, such as a decentralized random oracle, a decentralized external oracle, or a decentralized exchange. In a quorum consensus, on the contrary, several nodes create the next state together, implying quasi-parallel operation, which provides a real possibility to realize the previously mentioned services and algorithms.
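The quorum side of the comparison can be illustrated with the classical Byzantine fault tolerant threshold, where a state update is accepted once strictly more than two thirds of the validators have voted for it. The node names are illustrative:

```python
# Minimal sketch of a quorum decision: a proposal is accepted once more
# than 2/3 of the validators have voted for it (the classical BFT
# threshold of 2f+1 out of n = 3f+1 nodes).

def quorum_reached(votes: set, validators: set) -> bool:
    """Accept once strictly more than 2/3 of validators have voted."""
    return 3 * len(votes & validators) > 2 * len(validators)

validators = {"n1", "n2", "n3", "n4"}        # n = 4, tolerates f = 1 fault
assert not quorum_reached({"n1", "n2"}, validators)    # 2 of 4: not enough
assert quorum_reached({"n1", "n2", "n3"}, validators)  # 3 of 4 > 2/3
```

Because many nodes contribute votes to the same state transition, this style of consensus lends itself to the quasi-parallel operation described above.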

Sunday, August 12, 2018

Notes on fork resolving policy

In the Bitcoin network there is a general rule: the longest chain should always be accepted as the valid one. However, this is just one implementation of a so-called fork resolving strategy, which might be extended and generalized in the long run. Such extensions might include: 
- instead of considering only chains, weighting the whole structure of possible forks, as Ethereum does by also taking uncle (ommer) blocks into account;
- in a permissioned chain, prioritizing and weighting blocks based on who signed them;
- in an attempt to discourage selfish mining, weighting blocks by their creation time.

Further implementations and algorithms are also possible; the important point is that resolving forks by always taking the longest chain is only one possible strategy.
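A generalized fork-choice rule of this kind can be sketched by scoring each fork as the sum of its blocks' weights. With weight 1 per block this reduces to the longest-chain rule; other weight functions (signer priority, creation time) plug in the same way. The data model below is illustrative, not any particular client's:

```python
# Sketch of a generalized, weighted fork-choice rule.

def chain_weight(chain, weight_fn):
    return sum(weight_fn(block) for block in chain)

def choose_fork(forks, weight_fn=lambda block: 1):
    """Return the fork with the highest total weight."""
    return max(forks, key=lambda chain: chain_weight(chain, weight_fn))

# Two competing forks; each block carries a signer used by a weighted rule.
fork_a = [{"signer": "authority"}, {"signer": "authority"}]
fork_b = [{"signer": "unknown"}, {"signer": "unknown"}, {"signer": "unknown"}]

# The plain longest-chain rule picks fork_b (3 blocks vs 2) ...
assert choose_fork([fork_a, fork_b]) is fork_b
# ... while a permissioned rule weighting authority signatures picks fork_a.
by_signer = lambda b: 2 if b["signer"] == "authority" else 1
assert choose_fork([fork_a, fork_b], by_signer) is fork_a
```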

Saturday, August 4, 2018

Blockchain consensus and scarcity


Blockchain consensus is always based on a scarce resource: in proof of work this is computing power; in proof of stake it is some kind of token with a limited supply and a monetary value; in proof of authority or Byzantine fault tolerant systems it is a voting right granted by a decentralized authority. The scarce resource is needed because it makes a Sybil attack, like replicating a node a couple of million times in a cloud environment, ineffective: participation in the consensus depends on the scarce resource and not on the number of replicated nodes.

Saturday, July 28, 2018

Consensus mechanism in a multi-hash Blockchain structure



As we have seen in our previous blogs, a blockchain system can be a multi-hash system as well, and such a system can be integrated with a multi-hash proof of work. It can also be easily combined with any other consensus mechanism: apart from proof of work there is no nonce, no nonce calculation, or any similar problem, so we can simply calculate the hashes, or depending on the situation one hash, and add the next block to the blockchain with the given consensus algorithm.

Proof of work in a multi-hash Blockchain structure


A proof of work can be realized in a multi-hash blockchain structure as well, as we have already seen in our previous blog. Let us imagine that the blocks in the blockchain are connected not by one but by several hash pointers. These hash pointers might be totally independent of each other, or they might be dependent. Supposing we want to have proof of work in such a structure, we have several options: 
1. There is one hash pointer with a nonce that alone plays the role in the proof of work.
2. Each hash has a nonce, and the proof of work is equally distributed among the nonces.
3. Each hash has a nonce, but the proof of work is distributed among the nonces in a weighted way.

A further difference is that the mechanism should work differently depending on whether there is a reset in the blockchain or not. In case of a reset, one hash pointer is simply set to zero, while the other ones should run at an increased difficulty. This is easier with an equally distributed difficulty and might be a little more challenging with a weighted distribution.
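Option 2 can be sketched as follows: a block is linked by two independent hash pointers, each with its own nonce, and the total difficulty is split equally between them. Difficulty here simply means leading zero hex digits; the block layout and difficulty encoding are illustrative, not a concrete protocol:

```python
# Sketch of equally distributed proof of work over two hash pointers.
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def mine_pointer(prev_hash: str, payload: bytes, zeros: int):
    """Find a nonce so that this pointer's hash has `zeros` leading zeros."""
    nonce = 0
    while True:
        digest = h(prev_hash.encode() + payload + nonce.to_bytes(8, "big"))
        if digest.startswith("0" * zeros):
            return nonce, digest
        nonce += 1

def mine_block(prev1: str, prev2: str, payload: bytes, total_zeros: int = 4):
    """Each of the two hash pointers carries half of the total difficulty."""
    per_hash = total_zeros // 2
    nonce1, h1 = mine_pointer(prev1, payload, per_hash)
    nonce2, h2 = mine_pointer(prev2, payload, per_hash)
    return {"nonce1": nonce1, "hash1": h1, "nonce2": nonce2, "hash2": h2}

block = mine_block("00" * 32, "11" * 32, b"tx-data", total_zeros=4)
assert block["hash1"].startswith("00") and block["hash2"].startswith("00")
```

After a reset, the same scheme continues with one pointer restarted from zero and the remaining pointer mined at a higher `zeros` value until the reset pointer catches up.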

Thursday, July 26, 2018

Forgettable Blockchain hash structure (multi-hash blockchain)

Blockchain platforms are not really optimal in the sense that immutability is not always desirable for many different applications. Such an application is, for example, GDPR-conform identity management, which should have the possibility to delete or modify data in a final way, meaning that old versions do not remain in the blockchain. 
One way of achieving this is simply to reset the hash pointer chain from time to time and forget the old values. Certainly, the problem is that at resetting the pointer the whole system becomes very vulnerable to different kinds of attacks. This can be avoided by using two independent hash pointer chains and resetting them with a delay, meaning that at resetting a hash pointer p1, the variables still need to be compatible with hash pointer p2, as in the following picture:



Certainly, such a system has less security than a classical blockchain solution. It can be embedded easily into a state-based solution, but not so easily into a UTXO-based one. Further consideration is required if both transactions and state variables are stored as information; certainly, the logic should be applied to the state variables and only indirectly to the transactions. The system might be combined with classical blockchain solutions as well, separating variables that should be preserved in the blockchain forever from those that should be preserved only for a given time frame.
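The delayed-reset idea can be sketched with two independent hash pointers that are reset alternately, so that at any moment at least one unbroken chain still commits to the current state. The representation is illustrative: each pointer is just a running hash over the state updates:

```python
# Sketch of two hash pointer chains with alternating, delayed resets.
import hashlib

GENESIS = "0" * 64

def extend(pointer: str, state_update: bytes) -> str:
    return hashlib.sha256(pointer.encode() + state_update).hexdigest()

class ForgettableChain:
    def __init__(self):
        self.p1 = GENESIS
        self.p2 = GENESIS
        self.reset_next = "p1"          # which pointer is reset next time

    def add(self, state_update: bytes):
        # Every update is committed by BOTH pointers.
        self.p1 = extend(self.p1, state_update)
        self.p2 = extend(self.p2, state_update)

    def reset(self):
        # Only one pointer forgets its history; the other keeps protecting
        # the state until the next (delayed) reset of the other pointer.
        if self.reset_next == "p1":
            self.p1, self.reset_next = GENESIS, "p2"
        else:
            self.p2, self.reset_next = GENESIS, "p1"

chain = ForgettableChain()
chain.add(b"id-record-v1")
chain.reset()                            # p1 forgets, p2 still commits
assert chain.p1 == GENESIS and chain.p2 != GENESIS
```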

Monday, July 23, 2018

Optimizing IOU debts and mining


As we have seen in the previous blog, debt optimization practically means proposing a new directed graph structure in such a way that the balances of the individual accounts do not change. The easiest way to represent the debt graph is the adjacency matrix, where each element A[i,j] represents the IOU contract from i to j. Based on that representation, we can formally define the balance of an account as well: 

Balance i = Sum j (A[i,j]) - Sum k (A[k,i])
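Computing the balances from the adjacency matrix is straightforward; the 3-account debt graph below is a made-up example:

```python
# Balance_i = sum_j A[i][j] - sum_k A[k][i], over a plain list-of-lists matrix.

def balances(A):
    n = len(A)
    return [sum(A[i][j] for j in range(n)) - sum(A[k][i] for k in range(n))
            for i in range(n)]

# A[i][j] = size of the IOU from i to j:
# 0 owes 1 ten units, 1 owes 2 ten units, 2 owes 0 five units.
A = [[0, 10, 0],
     [0, 0, 10],
     [5, 0, 0]]
assert balances(A) == [5, 0, -5]
```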

Considering a general mining process, there can be several transactions {IT1, IT2, ... ITN} issuing new IOUs, each signed by its creator. On top, there is a set of optimization transactions {OT1, OT2, ... OTN}, either signed by trusted optimizer nodes or by nobody. The two sets of transactions sit in two separate transaction pools. The idea of mining is to find subsets {IT1, IT2, ... ITK} and {OT1, OT2, ... OTK} in such a way that, for every account, the balance is changed only by the issuing transactions, meaning that:

Balance i (new) = Balance i (old) + Sum j (IT[i,j]) - Sum k (IT[k,i]), while Sum j (OT[i,j]) - Sum k (OT[k,i]) = 0 for every account i, where IT and OT are the matrices built up from the selected issuing and optimization transactions. Certainly, the complexity of the network has to be reduced by the optimization transactions; it is an open question how this reduction can be measured. 

Based on these definitions, there can be a one-shot or a two-round transaction process: 
- with two rounds, the first round is a pure optimization round while the second one is a classical transaction round; 
- in a one-shot process, both the optimization and the new transactions take place together. 
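The miner's validity check for the optimization part can be sketched as follows: a proposed rewiring of the debt graph is acceptable only if it leaves every account balance unchanged. The matrices and the example data are illustrative:

```python
# Sketch of the balance-preservation check for optimization transactions.

def balances(A):
    n = len(A)
    return [sum(A[i][j] for j in range(n)) - sum(A[k][i] for k in range(n))
            for i in range(n)]

def optimization_valid(A_old, A_new):
    """The rewired debt graph must preserve all account balances."""
    return balances(A_old) == balances(A_new)

# 0 owes 1 ten, 1 owes 2 ten: the optimizer collapses the chain of debts
# into a single IOU of 0 owing 2, removing one edge from the graph.
A_old = [[0, 10, 0],
         [0, 0, 10],
         [0, 0, 0]]
A_new = [[0, 0, 10],
         [0, 0, 0],
         [0, 0, 0]]
assert optimization_valid(A_old, A_new)
```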

Thursday, April 12, 2018

On fairness of transaction ordering in blockchains


Proof of work and mining systems are actually pretty far from fair regarding transaction ordering. On the one hand, the miners, or the validators in a proof of stake system, act as local dictators over the set of transactions: they can select which transactions are put into the next block. This gives the possibility to censor or delay certain transactions; how long this delay can be depends on the competition and collaboration of the miners and validators. On the other hand, transactions are usually processed based on transaction fees, and miners certainly prioritize the transactions with the higher fees. This gives an average user the possibility to game the system: being sure that a higher-fee transaction will be prioritized over a lower-fee one gives, for instance, the possibility of a successful double-spending attack.
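The fee-driven ordering can be sketched as a miner greedily filling a block with the highest-fee transactions from the mempool, ignoring arrival order. The mempool contents are made-up example data:

```python
# Sketch of fee-based block building: a later, higher-fee transaction
# (e.g. a double-spend attempt) jumps ahead of an earlier, cheaper one.
import heapq

def select_for_block(mempool, block_size):
    """Pick the block_size highest-fee transactions, ignoring arrival order."""
    return heapq.nlargest(block_size, mempool, key=lambda tx: tx["fee"])

mempool = [
    {"id": "pay-merchant", "fee": 1},   # arrived first
    {"id": "double-spend", "fee": 50},  # spends the same coins, higher fee
    {"id": "other-tx", "fee": 5},
]
block = select_for_block(mempool, block_size=2)
assert [tx["id"] for tx in block] == ["double-spend", "other-tx"]
```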

It is an open question whether a blockchain protocol can be designed with fair ordering. Other distributed ledger technologies, like Hashgraph, do have the fair ordering property.