...by Daniel Szego
"On a long enough timeline we will all become Satoshi Nakamoto."
Daniel Szego

Saturday, September 29, 2018

Representing supply and production chains on the blockchain


When representing supply chains and production chains on the blockchain, we have one big advantage: these chains can be integrated with each other in a fairly natural way. From the business perspective, examples might be:
- Integrating two dependent supply chains, like the supply chain for automobiles that is preceded by the supply chain of the metal industry. The two chains can be built up individually and integrated later on.
- Refining a supply chain with production chains. As a first step, a supply chain might model production elements only at a high level; these elements can later be refined with the help of a detailed production chain model.

From a technical point of view, the integration can take different forms as well:
- Integrating supply or production chains on the same blockchain is pretty simple.
- Integrating supply or production chains across several blockchain technologies might be supported by atomic swaps or interledger protocols.
- Last but not least, supply chains can be integrated through a more complicated intermediary, for example with the help of a market for the different resources.

Friday, September 28, 2018

Fabric composer tips and tricks - revert transaction


Reverting a transaction is pretty easy in Hyperledger Fabric Composer: you just have to throw an error, like:

throw new Error('Error message');

If an error is thrown, the whole transaction is rolled back. Surprisingly, however, a catch block can be used to stop the exception from propagating; if the exception does not propagate to the top, the transaction is not reverted.
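A minimal sketch of both behaviors inside a transaction processor function (the transaction type, namespace and logic are illustrative assumptions, not from the original post):

    /**
     * Illustrative transaction processor function; namespace and type are assumed.
     * @param {org.example.basic.SampleTransaction} tx
     * @transaction
     */
    async function sampleTransaction(tx) {
        try {
            // some business rule turns out to be violated
            throw new Error('Error message');
        } catch (error) {
            // the exception is swallowed here, so it never propagates to the top:
            // the transaction is NOT reverted and earlier changes stay committed
        }
        // rethrowing instead (throw error;) would roll the whole transaction back
    }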

Fabric composer tips and tricks - auto increment id of an asset


A classical problem in Hyperledger Fabric Composer is that one wants to auto-increment the id of a newly created asset or participant. One way of achieving this is to query, during the creation process, the number of already existing assets and simply increase the id by one. The following code demonstrates an example:

    const assetReg = await getAssetRegistry(namespace + '.Asset');
    const existingAssets = await assetReg.getAll();

    // getAll() resolves to an array, so the count is simply its length
    const numberOfAssets = existingAssets.length;

    const newAssetId = numberOfAssets + 1;

Certainly, this construct does not protect from "double spending": if two assets are created in the same round, both might get the same id, so one of them will be discarded as invalid. One way to overcome this difficulty is to use, besides the incremented asset id, another input parameter of the asset, like the asset name.
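A hedged sketch of this idea (the variables continue the snippet above; the asset name field is an assumed input of the transaction):

    // combine the incremented counter with the caller-supplied asset name, so two
    // assets created in the same round still end up with different ids
    const compositeId = newAssetId + '-' + tx.assetName;

    const factory = getFactory();
    const newAsset = factory.newResource(namespace, 'Asset', compositeId);
    await assetReg.add(newAsset);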



Nakamoto consensus, quorum and parallel processing


There is a fundamental difference between the two major consensus families: Nakamoto consensus and quorum-based consensus. Since in Nakamoto consensus always one node "wins" and creates the next block, the whole process is pretty much sequential. It means that the winning node is a "dictator" for the block it creates, implying a quasi sequential processing. With that structure a lot of services and algorithms are not easy to realize, such as a decentralized random oracle, a decentralized external oracle, or a decentralized exchange. On the contrary, in a quorum consensus some of the nodes create the next state together, implying a quasi parallel working, which provides a real possibility to realize the previously mentioned services and algorithms.

Tuesday, September 25, 2018

The role of blocks in a blockchain protocol



In a standard blockchain protocol, mining or minting a block has two different purposes. On the one hand, there must be a common set of ordered transactions that is acceptable to the whole network and avoids double spending. Avoiding double spending is actually equivalent to having a common order of transactions that is the same for the whole network. On the other hand, the transactions must be applied to the state. Although these two steps happen in the same round, this is not necessarily a must. An application can be imagined in which applying transactions to the state happens only a couple of rounds later, as long as the whole network perceives the same ordered set of transactions. In certain special circumstances, applying to the state might even be non-deterministic, containing some built-in optimization possibilities.
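As a small illustrative sketch of this separation (the two-round delay and all names are assumptions, not part of any concrete protocol), a node could buffer the agreed, ordered transactions and apply them to the state only a few rounds later:

    // illustrative node-side sketch: ordering and state application are decoupled
    const orderedRounds = [];          // rounds of transactions, agreed on by the network
    let appliedUpTo = 0;               // index of the last round applied to the state
    const APPLY_DELAY = 2;             // hypothetical: apply a round only 2 rounds later

    function onRoundOrdered(transactions) {
        orderedRounds.push(transactions);
        // apply only the rounds that are already APPLY_DELAY rounds old
        while (appliedUpTo < orderedRounds.length - APPLY_DELAY) {
            orderedRounds[appliedUpTo].forEach(applyToState);
            appliedUpTo++;
        }
    }

    function applyToState(tx) {
        // deterministic state transition, identical on every node
    }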

Sunday, September 23, 2018

On chain governance with Hashgraph


Creating a native on-chain governance on Hashgraph is similar to creating any kind of on-chain governance system. As a first step, there must be some special internal governance variables that might influence consensus parameters, node configuration or other details of the core algorithm. Such variables should have two properties:
1. They cannot be changed freely, only after a kind of community voting.
2. They must be anchored somehow to the nodes or to the core algorithm.

For modifying on-chain properties there must be a special transaction; instead of a simple "change the value of a blockchain field" transaction, there should be:

TR: I vote for changing the value of a governance variable - signed by the private key of an account that can actually vote for such a change. Such accounts might be associated with nodes, or with consortium members of the Hedera Hashgraph. 

After the voting transactions are validated and ordered and the voting period has ended, each node can apply the result to the state individually. A possible algorithm, sketched below, might be:
- count the votes for a given change; if more than 50% voted for it, apply the change to the state.
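A minimal vote-counting sketch of that rule (the vote layout, the eligible voter count and the state object are assumptions):

    // votes: validated and ordered voting transactions for one concrete change,
    // e.g. { variable: 'consensusTimeout', newValue: 5, voter: 'account-1' }
    function applyGovernanceVotes(votes, totalEligibleVoters, state) {
        const voters = new Set();
        votes.forEach(function (vote) {
            voters.add(vote.voter);               // each account counts only once
        });
        if (voters.size > totalEligibleVoters / 2) {
            // more than 50% voted for the change: every node applies it identically
            state[votes[0].variable] = votes[0].newValue;
        }
        return state;
    }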
  


Creating a Byzantine fault tolerant decentralized exchange with Hashgraph


The ordering algorithm of the Hashgraph architecture provides a way to realize a real decentralized exchange among different internal cryptoassets. Let us imagine that we have several native cryptoassets in an account/state based system. What we have to do is to introduce special transactions that represent buy and sell orders. As an example, a buy order might look like:

Tr_order: buy 100 cryptoassets at 10 - signed by the private key of an account that has the balance to buy.

Similarly, a sell order might look like:

Tr_order: sell 50 cryptoassets at 11 - signed by the private key of an account that gets the cryptoasset at the end. 

The order transactions are validated and ordered with the help of the gossip about gossip protocol, and after a couple of rounds every node will see exactly the same order of the order transactions. This list of order transactions is actually the order book of the decentralized exchange.

When applying the transactions to the state, we have to do two things:
- there must be a kind of order matching algorithm on the order book, in other words on the ordered order transactions. Examples might be FIFO or Pure Pro Rata. A list of possible order matching algorithms can be found for example under the following link.
- the orders that are matched have to be applied to the state. That practically means increasing and decreasing certain balances, similarly to how a simple transaction is applied to the state. In this case, however, there might be more than two accounts that have to be modified at the same time.

As the above mentioned algorithm is deterministic and the nodes see exactly the same order of order transactions, the nodes can run the algorithm independently from each other and get the same result. As the core algorithm is Byzantine fault tolerant, the exchange will probably have the same property.
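A minimal, hedged FIFO matching sketch (the order and balance layouts are assumptions; a production order book would be considerably more involved):

    // illustrative FIFO matching over the consensus-ordered order transactions;
    // balances[account] is assumed to look like { asset: 0, cash: 0 }
    function matchOrders(orderBook, balances) {
        const buys = orderBook.filter(function (o) { return o.side === 'buy'; });
        const sells = orderBook.filter(function (o) { return o.side === 'sell'; });

        buys.forEach(function (buy) {
            sells.forEach(function (sell) {
                const filled = Math.min(buy.amount, sell.amount);
                if (filled > 0 && buy.price >= sell.price) {
                    // deterministic state update: move asset and payment between accounts
                    balances[buy.account].asset += filled;
                    balances[sell.account].asset -= filled;
                    balances[buy.account].cash -= filled * sell.price;
                    balances[sell.account].cash += filled * sell.price;
                    buy.amount -= filled;
                    sell.amount -= filled;
                }
            });
        });
        return balances;
    }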

The decentralized exchange can be further improved by not hard-coding the order matching algorithm, but instead realizing an on-chain governance process that changes the order matching on the fly.

For an experimental implementation, take a look at the following github repository.

Creating native decentralized oracle with Hashgraph


Hashgraph provides the possibility to realize a fully decentralized, Byzantine fault tolerant native oracle service. Let us imagine that we want to call an external data source in a smart contract function, like:

function() {
  state = external_oracle("http://external_datasource/")
}

During event propagation, each of the {N1, N2, ... Nm} nodes has to call http://external_datasource/ independently from the others, producing a set {V1, V2, ... Vm} of possibly different values as results of the calls. These values are propagated, voted and ordered by the events together with the transactions.
As soon as both the transactions and the values are communicated, ordered and voted (so practically when applying to the state), we can use the following function to calculate the result of the oracle:

external_oracle calculation:
step 1: filter out from {V1, V2, ... Vm} the extremely big or small values, like 33% of all the values
step 2: take the average of the non-filtered values
step 3: the result of the function is this average
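A small sketch of this calculation (the one third trimming fraction follows the steps above; everything else is an assumption):

    // trimmed mean over the voted and ordered oracle values:
    // drop roughly 33% of the values at the extremes, then average the rest
    function externalOracleValue(values) {
        const sorted = values.slice().sort(function (a, b) { return a - b; });
        const cutPerSide = Math.floor(sorted.length / 6);   // ~1/6 at each end, ~1/3 in total
        const trimmed = sorted.slice(cutPerSide, sorted.length - cutPerSide);
        const sum = trimmed.reduce(function (acc, v) { return acc + v; }, 0);
        return sum / trimmed.length;
    }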

Since at the time of the calculation each node sees exactly the same set of {V1, V2, ... Vm} values and the external_oracle calculation is deterministic, each node will calculate exactly the same value, even though they calculate it independently from each other.

Since the communication architecture (Hashgraph) is Byzantine fault tolerant, the oracle itself will probably be Byzantine fault tolerant as well (this might require some more consideration).

The algorithm can be further secured cryptoeconomically, for example by giving a bigger transaction fee to nodes that produce values closer to the end result of the external_oracle calculation function (providing a Schelling point from a game theoretical point of view).

A demo implementation can be found in the following github repository.

Hashgraph, transaction censoring and transaction delaying


The Hashgraph algorithm provides, from a mathematical point of view, fair ordering and fair access. However, it is important to point out that these mathematical proofs work on the event level, which might be a little bit different if we consider things on the transaction level or from a real architecture point of view.

One attack might be to censor a newly created transaction at the node level, before the transaction even really reaches the network, before it is put into an event. The censorship can be realized by directly hacking the Hedera software, but it might be realized indirectly as well, like censoring the transactions at the operating system or at the network level. For such a censorship attack, there might be several resolutions:
- users might use only trusted nodes. However, in an untrusted or semi-trusted architecture this is not a fortunate approach.
- users might broadcast a transaction to a node and monitor the system. If the transaction was not propagated, it can be broadcast again, either to the same node, to another node, or to several other nodes. This can work in situations where delaying or mistiming of the transaction cannot cause problems.
- to be 100% sure that the transaction is propagated as fast as possible, users have to broadcast the transaction to 33% + 1 of the nodes. As the theoretical limit of a Byzantine fault tolerant architecture is 33% faulty nodes, in such a situation at least one node must work honestly and propagate the transaction right away.

A similar attack can be realized if a newly created transaction is propagated to the network by a node not with the next possible event but with a later one, indirectly causing a delay of the transaction. This might cause serious economic damage in timing sensitive applications, like online gambling and games. To be sure that no delay can occur for a newly created transaction, the most secure way is again to propagate the transaction to 33% + 1 of the nodes in the system. In such a situation, if one honest node propagates the transaction to the network as soon as it gets it, then the transaction will be part of the hashgraph right away, independently of whether other faulty or hacked nodes forward the transaction only with a delay.
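A minimal client-side sketch of this broadcast strategy (the node list and the submit helper are assumptions):

    // submit the same signed transaction to one third of the nodes plus one, so at
    // least one honest node receives it and gossips it with its next event
    function broadcastWide(signedTx, nodes, submitToNode) {
        const quorum = Math.floor(nodes.length / 3) + 1;
        nodes.slice(0, quorum).forEach(function (node) {
            submitToNode(node, signedTx);         // hypothetical submit helper
        });
    }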

Fair ordering of transactions in Hashgraph


The Hashgraph virtual voting algorithm provides a fair ordering of the events in the following sense:
- Fair access: no individual can stop a transaction from entering the system, or even delay it very much.
- Fair timestamps: each event gets a consensus timestamp that reflects when the majority of the network members received that event. This consensus timestamp is "fair", because it is not possible for a malicious node to corrupt it and make it differ very much from that time.
- Fair transaction order: a common order of the events that is seen exactly the same way after virtual voting by each node and that cannot be influenced by individual nodes or by a minority of the nodes.

There is one problem, however: the mathematical side of the algorithms and the proofs guarantees a fair ordering of the events and not of the transactions themselves. Certainly, if each event contains at most one transaction, then fair ordering of the events equals fair ordering of the transactions. If, however, an event can contain multiple transactions, the situation is not so simple. A node creating multiple transactions for an event can game the order of the transactions within a certain event and even between multiple events.

To avoid unfair ordering of transactions within an event, an improvement might be to define an order and a timestamp of the transactions inside an event. One such order might be to consider the hashes of the transactions in an event in increasing order, like:

sha256(Tr_1), sha256(Tr_2), ... sha256(Tr_k) in increasing order, where {Tr_1, Tr_2, ... Tr_k} is the set of transactions associated with an event.

It does not really matter how an individual node physically orders the transactions in an event; the event is always interpreted with the transactions in the order of their hashes. As a consequence, this ordering cannot be gamed by individual nodes, unless transactions or transaction signatures are forged, which is usually pretty difficult because of the cryptographic algorithms.
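A minimal sketch of this canonical in-event ordering (Node.js crypto assumed; the transaction encoding is illustrative):

    const crypto = require('crypto');

    // order the transactions of one event by the hex value of their sha256 hashes
    function canonicalTransactionOrder(transactions) {
        return transactions
            .map(function (tx) {
                const hash = crypto.createHash('sha256')
                                   .update(JSON.stringify(tx))   // illustrative encoding
                                   .digest('hex');
                return { hash: hash, tx: tx };
            })
            .sort(function (a, b) { return a.hash < b.hash ? -1 : 1; })
            .map(function (entry) { return entry.tx; });
    }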

Similarly, a global order of the transactions can be defined as well:
- the global order of the events is provided by the Hashgraph algorithms,
- the order of the transactions within an event is provided by the ordering of the hashes.

Based on the global order and on the Hashgraph algorithm for attaching a timestamp to each event, the transactions can be associated with a timestamp as well:
- if an event contains only one transaction, the transaction timestamp equals the timestamp of the event.
- if an event contains several transactions, the transactions must be timestamped within the event. A possible algorithm, sketched below, might be the following:
Consider an event Ei that contains the transactions {Tr_1, Tr_2, ... Tr_j}, ordered as {sha256(Tr_1), sha256(Tr_2), ... sha256(Tr_j)}. The time period between Ei-1 and Ei can be evenly distributed among the transactions in a way that a transaction later in the ordering gets a newer timestamp.
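A small sketch of this interval splitting (timestamps as plain numbers; all names are assumptions):

    // evenly distribute the interval (previousEventTime, eventTime] among the
    // transactions of an event, following their canonical hash based order
    function timestampTransactions(orderedTxs, previousEventTime, eventTime) {
        const step = (eventTime - previousEventTime) / orderedTxs.length;
        return orderedTxs.map(function (tx, index) {
            return { tx: tx, timestamp: previousEventTime + step * (index + 1) };
        });
    }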

Last but not least, fair access can only be reached if a transaction is communicated to several nodes, like 33% + 1 of the existing nodes.
  


How to create real decentralized random oracle with Hashgraph



A real random oracle is one of the unsolved Holy Grails of decentralized networks. Hashgraph technology perhaps promises a real possibility to solve it. A pseudo algorithm might be the following:

Let us assume that we have a smart contract function that assigns a new value to a state variable, like

function() {
 state = new_state
}

We basically do not call this function directly but create a transaction, like

TR: call function() signed by private_key of some account owner.

This transaction is propagated on the network by the gossip about gossip protocol, probably checked by every node, goes through virtual voting and is ordered with the other transactions into a common order. Only after that is it applied to the state (independently of whether it is an account based or a UTXO based system).

So let us assume a function that calls a real native decentralized random oracle, like the following:

function() {
 state = rand()
}

and a transaction that calls this function.

As soon as a node "sees" a transaction calling a function that contains a random oracle call, the node calls its internal native random function and creates a random value ri, which is further propagated on the network similarly to the transactions. As a consequence there will be a set of {r1, r2, ... rN} random values propagated on the network, one for each node.

As a next step, we wait until there is a consensus on the order of these random values, so every node will see exactly the same order of the individual random values, like r4, r3, r1, r2 ... rN. At this point we give a value to the random function as rand() = merkle_root(r4, r3, r1, r2 ... rN). As merkle_root is a deterministic function and each node sees exactly the same order of individual random numbers, rand() will produce the same result even if it is calculated independently at each node.
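A minimal sketch of the deterministic combination step (Node.js crypto assumed; any deterministic, hash based combination of the ordered values would work the same way):

    const crypto = require('crypto');

    function sha256(data) {
        return crypto.createHash('sha256').update(data).digest('hex');
    }

    // deterministic merkle root over the consensus-ordered random values;
    // assumes a non-empty list of values
    function merkleRoot(orderedValues) {
        let level = orderedValues.map(function (v) { return sha256(String(v)); });
        while (level.length > 1) {
            const next = [];
            for (let i = 0; i < level.length; i += 2) {
                const right = level[i + 1] || level[i];   // duplicate last node on odd levels
                next.push(sha256(level[i] + right));
            }
            level = next;
        }
        return level[0];   // rand() on every node evaluates to this same value
    }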

Certainly, the real quality of the result depends on the individual random numbers as well. However, as Hashgraph is Byzantine fault tolerant, if more than two thirds of the nodes provide a relatively good random number, meaning either cryptographically secure or pseudorandom, the result will probably have the same attribute, meaning cryptographically secure or pseudorandom.

An experimental implementation can be found in the following github repo.

Monday, September 10, 2018

Attaching legal prose to a smart contract



Some smart contract applications have the common requirement of attaching a legal prose document that can be validated even by notary or legal services. From a technical point of view, the legal prose is a document that is not stored directly on the blockchain, only the hash of the document. Possible storage options are different decentralized file systems, like IPFS or Swarm. The legal document must be in a well-formed format that does not contain visual elements; an XML format similar to CEN Metalex might work. The document should derive some of its elements from the smart contract. One implementation might be that the contract document itself is just a template whose parameters are found only in the smart contract itself. Another option is to copy these parameters into the contract document at each update and update the hash as well.
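A small sketch of the anchoring step (Node.js crypto and fs assumed; the file name and the referenced fields are illustrative only):

    const crypto = require('crypto');
    const fs = require('fs');

    // hash the well-formed (e.g. XML) legal prose; only this digest goes on chain,
    // the document itself lives in IPFS, Swarm or another decentralized file system
    function hashLegalDocument(path) {
        const content = fs.readFileSync(path);
        return crypto.createHash('sha256').update(content).digest('hex');
    }

    // the smart contract would then reference something like:
    // { documentHash: hashLegalDocument('contract.xml'), storageUri: 'ipfs://...' }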

A further challenge arises from the immutability of the ledger. Even legal contracts should probably have something like a retention policy, meaning that after a certain time the information might need to be deleted from the blockchain, which raises some further challenges with the given blockchain technologies.

Last but not least, the privacy of such a solution needs to be carefully designed, which might raise some more unexpected challenges.

Saturday, September 1, 2018

How to implement a Blockchain from scratch - block and blockchain objects


Building up a blockchain architecture, two of the most important roles are certainly played by the block and blockchain objects. In the implementation, at least three important scenarios have to be taken into account (a small object sketch follows the list):
- Mining: at mining, the miner creates a brand new block, adds transactions to the block from the transaction pool and solves some cryptographic puzzle. Once the block is created correctly, it is added to the local blockchain and communicated to the network with the help of a gossip protocol.
- Synchronization: at synchronization, the blocks are queried from the network one by one, validated and added to the chain. At synchronization, there is not necessarily a forking strategy; the blocks can be queried based on block ids, which can be provided as one consistent view of the chain. Similarly, the other side of the synchronization provides the set of block ids that is regarded as the longest consistent blockchain.
- Gossiping blocks: if the node receives a block from the network, the node should first of all already be synchronized. If we receive a new block, it must be validated and a fork resolution algorithm has to be executed as well. If the block extends the longest chain, it should be added to the longest chain. If the block extends one of the alternative chains, it can be added to that alternative chain and it must be decided whether we have a new longest chain. It might also be the case that the block cannot be added to any chain at all; if so, it can be added to a pool of orphaned blocks. Last but not least, there might be the situation that the block cannot be added to the chain alone, only together with another block that is already in the pool of orphaned blocks.
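As a minimal, hedged sketch of the two objects (field names, the hashing and the fork handling are strongly simplified assumptions, not the implementation of the series):

    const crypto = require('crypto');

    class Block {
        constructor(previousHash, transactions, nonce) {
            this.previousHash = previousHash;
            this.transactions = transactions;
            this.nonce = nonce;
        }
        hash() {
            return crypto.createHash('sha256')
                         .update(this.previousHash + JSON.stringify(this.transactions) + this.nonce)
                         .digest('hex');
        }
    }

    class Blockchain {
        constructor() {
            this.blocks = [];          // the longest (main) chain
            this.orphans = [];         // blocks that do not fit the chain yet
        }
        tipHash() {
            return this.blocks.length ? this.blocks[this.blocks.length - 1].hash() : '0';
        }
        // used both at synchronization and at gossiping: validate, then extend or park the block
        addBlock(block) {
            if (block.previousHash === this.tipHash()) {
                this.blocks.push(block);
                return true;
            }
            this.orphans.push(block);  // real fork resolution and orphan handling would go here
            return false;
        }
    }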