...by Daniel Szego
"On a long enough timeline we will all become Satoshi Nakamoto.."
Daniel Szego

Wednesday, November 21, 2018

Notes on Enterprise consortium blockchain strategy

If you want to create your own consortium blockchain technology and platform focusing on the enterprise segment, you should do the following:
- create an easy-to-use infrastructure template in AWS
- create templates in Microsoft Azure
- create templates in the IBM Cloud if possible
- create integration technologies with SAP
- create plugins for Microsoft products
- create connectors for every possible ERP product
- get your project into the Hyperledger incubator of the Linux Foundation
- get your product listed with the Enterprise Ethereum Alliance

And the reason for that is simple: enterprise procurement will not really change in the short run. Products from the big enterprise IT vendors, or a connection with these companies, will always be preferred.



Tuesday, November 20, 2018

Truffle and Solidity tips and tricks - nonce error with Metamask


If you use the Truffle development environment together with Metamask, you can often get the following error message: "the tx doesn't have the correct nonce". The problem is that you are probably using the same accounts both from the Truffle development console and from the Metamask UI. Unfortunately, Metamask does not automatically update/refresh the nonce if a transaction was executed from the Truffle development console. So what you have to do is reset the Metamask account: Settings - Reset Account.

Monday, November 19, 2018

Architecting for Byzantine fault tolerance

Computer architectures of the future will surely be extended by some new design aspects, namely Byzantine fault tolerance and the trust model. As fault tolerance is usually an aspect to investigate anyway, future systems can be designed for Byzantine fault tolerance, meaning that even if parts of the system are hacked, the system still delivers correct results. One aspect that needs to be taken into account is the CAP theorem, which implies that in case of a network partition the system has to choose between availability and consistency. Another important design choice is the trust model. When analyzing the trust model, each component of the system has to be investigated in terms of whether a service of the system works only if we trust the given component. In this sense we can distinguish between trusted, trustless and semi-trusted services or components.

Sunday, November 18, 2018

Notes on multi block algorithms and protocols


Research on decentralisation is at the moment focused pretty much on the scalability of the different protocols and platforms. Based on the current research directions, there might well be efficient blockchain protocols in a couple of years. So we might as well investigate the possibilities of creating algorithms and protocols that cannot be executed in one block or one transaction, but can only be realized by several actions crossing several blocks. Actually, layer 2 protocols like the Lightning Network or Raiden already go a little bit in this direction. Multi-block protocols can provide many services that are not imaginable with current one-block architectures.

How to create a native external oracle with Nakamoto consensus





Similarly to the previous blog post, designing a decentralized native external oracle can be realized in the same way as a native random oracle. The principle is basically the same: on the one hand, the miners or validators measure the external data source and put the values into blocks, or at least temporal blocks. On the other hand, the imported data should be kept secret, because otherwise new miners could influence or even hack the algorithm itself.

The full multi-block native external oracle algorithm can be described as follows:

1. An initiator having a {Priv, Pub} private-public key pair creates an external oracle request that includes the public key as well:

request_extern (Pub, N, ext)

, where Pub is the public key of the requestor, ext is the external data source and N is the number of rounds during which the external data source has to be measured.

2. At mining or validation, a miner or validator measures the value of the external data source, encrypts it with the public key of the requestor and puts it into the request itself. So after the first validation, the request will look like:

request_extern (Val1)(Pub, N-1, ext) 

where Val1 is the measured external data encrypted by the public key. To ensure security, the Val1 value should be put into the blockchain as well, not necessarily forever, but at least during the N rounds of the execution of the native external oracle. In other words, the request would be split into two parts:

request_extern (Val1) 

will be written into the blockchain

request_extern (Pub, N-1,ext)

can remain available as a new transaction request that is propagated throughout the network in the transaction pools, waiting to be mined or validated. Similarly, after k < N rounds, there would be k encrypted external values in the blockchain:

request_extern (Val1)
request_extern (Val2)
...
request_extern (Valk) 

and a new request as a transaction, which is

request_extern (Pub, N-k, ext)

3. After N rounds, the requestor can aggregate the external values and decrypt them with the Priv private key. 

Ext1 = Decrypt_Priv(Val1)
Ext2 = Decrypt_Priv(Val2)
...
ExtN = Decrypt_Priv(ValN)

The individual external values should be aggregated in a way that they provide a correct average value even if some of the nodes try to hack or game the system. The algorithm should also contain an incentive mechanism, giving a reward to the nodes that provided correct values, motivating nodes in this way to produce correct data and providing a Schelling point as the decision-making algorithm. Supposing that around one third of the nodes and measurements can be faulty, we can have the following algorithm, as sketched in the code after the list:

a. filter out the 33% most extreme values,
b. take the average of the remaining values, providing the real external value,
c. reward the nodes whose values were not filtered out, based on their distance from the average.
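
As an illustration, a minimal on-chain sketch of such a trimmed-mean evaluation could look as follows. This is only a sketch with hypothetical names (Solidity); it assumes the decrypted values are already available as unsigned integers, and the reward step c. is only marked as a comment:

pragma solidity ^0.4.25;

contract ExternalOracleEvaluation {

    // Steps a and b: sort the decrypted values, drop the most extreme
    // sixth on both ends (roughly 33% in total) and average the rest.
    function aggregate(uint[] memory values) internal pure returns (uint) {
        require(values.length >= 3);
        sort(values);
        uint cut = values.length / 6;
        uint sum = 0;
        for (uint i = cut; i < values.length - cut; i++) {
            sum += values[i];
        }
        // Step c would additionally reward the nodes whose values were
        // kept, based on their distance from the resulting average.
        return sum / (values.length - 2 * cut);
    }

    // Simple in-memory insertion sort, sufficient for a small N.
    function sort(uint[] memory values) internal pure {
        for (uint i = 1; i < values.length; i++) {
            uint key = values[i];
            uint j = i;
            while (j > 0 && values[j - 1] > key) {
                values[j] = values[j - 1];
                j--;
            }
            values[j] = key;
        }
    }
}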

Unfortunately, the algorithm has some performance issues: it takes N rounds to find out a real decentralized external value. This can be pretty long and pretty costly, depending on the given blockchain platform. Considering a Nakamoto consensus, the algorithm is pretty difficult to speed up, as the basic idea is that the external values come from N individual sources, which at a Nakamoto consensus actually means N different blocks. This also implies that the data source should keep its value for a long enough time, like preserving the same value for hours. The algorithm cannot really be used with fast-changing data sources.

A further question is how exactly the data can be used within a smart contract. As private keys should not be stored in the blockchain, there should be an additional round with the wallet software to decrypt and aggregate the information, which might introduce elements of unnecessary centralization and is difficult to build directly into a decentralized smart contract. It is important to note, however, that this private key is not necessarily the same as the private key of the account, so it can actually be revealed as soon as all the N nodes have measured a value. As a consequence, a second transaction can be used to calculate the real external value, like:

evaluate_extern (Priv)

At this moment the private key can be published to the blockchain, and the evaluation algorithm can be implemented with a smart contract in a decentralized way.

From a practical point of view, a smart contract with a built-in native external oracle would look like:

function with_ext (input params) return (output params) {
   ...
   variable = External(<external_datasource>);
   ...
}

For evaluating such a smart contract, two transactions should be used:

- the first one would call the function with the public key of the external oracle, initializing the external data measurement:

transaction init with_ext pub_key_ext

- the second transaction would publish the private key and carry out the whole evaluation and the execution of the rest of the business logic, like:

transaction exec with_ext priv_key_ext
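
Putting the two transactions together, a minimal skeleton of such a contract could look as follows. This is only an illustrative sketch with hypothetical names; the native measurement, decryption and aggregation steps are assumed to be provided by the platform and are only hinted at in the comments:

pragma solidity ^0.4.25;

contract WithExternalOracle {

    bytes public oraclePubKey;   // published by the init transaction
    bytes public oraclePrivKey;  // revealed by the exec transaction
    bool public initialized;

    // Transaction 1: publish the public key and start the N rounds of
    // encrypted measurements, i.e. request_extern(Pub, N, ext).
    function init(bytes pubKey) public {
        require(!initialized);
        oraclePubKey = pubKey;
        initialized = true;
    }

    // Transaction 2: reveal the private key; with it, the Val1..ValN stored
    // in the blockchain can be decrypted, aggregated and fed into the rest
    // of the business logic of with_ext.
    function exec(bytes privKey) public {
        require(initialized);
        oraclePrivKey = privKey;
    }
}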

Unfortunately, the proposed system has one denial-of-service attack possibility. The one who initiated the external oracle has the private key and can calculate the result in advance. If this result does not benefit him, he can choose not to reveal the private key and not to execute the second transaction.

Saturday, November 17, 2018

How to create a native random oracle with Nakamoto consensus


Creating a real native random oracle is one of the holy grails of the blockchain industry. While the problem is not so difficult if the consensus mechanism is a quorum, as the nodes participating in the consensus make their decisions independently from each other, it is more difficult with a Nakamoto consensus. The problem with the Nakamoto consensus is that the temporal leader creating the next block is practically a "dictator" of that block and can influence things like the random number. The algorithm can, however, be improved here as well with two ideas:
- creating a real random number takes several rounds, in which several nodes each guess a random number, and the guesses are aggregated at the end. Certainly, this alone is not necessarily a real solution, as later leaders might see the previous random values and might influence the next one in a way that is profitable to them. To avoid such a situation we can use the following idea:
- the random numbers are encrypted with a public key of the requestor. As a consequence, the next node does not really see the values of the previous blocks, so it cannot influence the final result.

The full multi-block native random oracle algorithm can be described as follows:

1. An initiator having a {Priv, Pub} private-public key pair creates a random oracle request that includes the public key as well:

request_random (Pub, N)

, where Pub is the public key of the requestor and N is the number of rounds during which the random number has to be generated.

2. At mining or validation, a miner or validator creates a native random number, encrypts it with the public key of the requestor and puts it into the request itself. So after the first validation, the request will look like:

request_random (Val1)(Pub, N-1) 

where Val1 is the generated random number encrypted by the public key. To ensure security, the Val1 value should be put into the blockchain as well, not necessarily forever, but at least during the N rounds of the execution of the random oracle. In other words, the request would be split into two parts:

request_random (Val1) 

will be written into the blockchain

request_random (Pub, N-1)

can remain available as a new transaction request that is propagated throughout the network in the transaction pools, waiting to be mined or validated. Similarly, after k < N rounds, there would be k encrypted random values in the blockchain:

request_random (Val1)
request_random (Val2)
...
request_random (Valk) 

and a new request as a transaction, which is

request_random (Pub, N-k)

3. After N rounds, the requestor can aggregate the random values and decrypt them with the Priv private key. 

Rand1 = Decrypt_Priv(Val1)
Rand2 = Decrypt_Priv(Val2)
...
RandN = Decrypt_Priv(ValN)

The individually generated random numbers should be aggregated in a way that the randomness is preserved even if some of the values were not really randomly generated. The exact algorithm here is an open question, but it is important that the original entropy of the requested random number is maintained even if some of the nodes are cheating. Ideas might be (a small sketch follows below):

sha256(Rand1, Rand2, .... RandN)
sha256(Rand1 xor Rand2 xor ... RandN)
sha256( ... sha256(sha256(Rand1))... )
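
As a minimal sketch, the XOR-then-hash variant could be implemented roughly as follows (illustrative Solidity, assuming the decrypted random values are already available as bytes32 words):

pragma solidity ^0.4.25;

contract RandomAggregation {

    // XOR preserves the entropy of every honest contribution, and the final
    // hash flattens any structure a cheating node might have injected.
    function aggregate(bytes32[] memory rands) internal pure returns (bytes32) {
        bytes32 acc = 0;
        for (uint i = 0; i < rands.length; i++) {
            acc ^= rands[i];
        }
        return sha256(abi.encodePacked(acc));
    }
}

As long as at least one contribution is truly random and all values were fixed in advance, which the encryption is supposed to guarantee, the XOR itself is already uniformly distributed.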
  
The algorithm has two drawbacks that can be fine-tuned in the long run:
- The algorithm takes N rounds to find out a real decentralized random number. This can be pretty long and pretty costly, depending on the given blockchain platform. Considering a Nakamoto consensus, the algorithm is pretty difficult to speed up, as the basic idea is that the random numbers come from N individual sources, which at a Nakamoto consensus actually means N different blocks.
- Based on the evaluation algorithm, we should assume that at least some of the miners or validators created a good random number. Given a good Byzantine evaluation function at the end, even if some of the nodes cheat, the resulting random number can still be a good one, like cryptographically secure. The problem is, however, that even the honest nodes are not really incentivized to create good random numbers, since we cannot really measure or punish whether nodes produce good random numbers. It can certainly be an assumption that a certain share of the nodes is honest, but it would actually be much better to measure and validate this fact.

A further question is how exactly the data can be used within a smart contract. As private keys should not be stored in the blockchain, there should be an additional round with the wallet software to decrypt and aggregate the information, which might introduce elements of unnecessary centralization and is difficult to build directly into a decentralized smart contract. It is important to note, however, that this private key is not necessarily the same as the private key of the account, so it can actually be revealed as soon as all the N nodes have created a guess for a random number. As a consequence, a second transaction can be used to calculate the real random number, like:

evaluate_random (Priv)

At this moment the private key can be published to the blockchain, and the evaluation algorithm can be implemented with a smart contract in a decentralized way.

From a practical point of view, a smart contract with a built-in native random oracle would look like:

function with_rand (input params) return (output params) {
   ...
   variable = Rand();
   ...
}

For evaluating such a smart contract, two transactions should be used:

- the first one would call the function with the public key of the random oracle, initializing the collection of the secret random numbers:

transaction init with_rand pub_key_rand

- the second transaction would publish the private key and carry out the whole evaluation and the execution of the rest of the business logic, like:

transaction exec with_rand priv_key_rand

Unfortunately, the proposed system has one denial-of-service attack possibility. The one who initiated the random oracle has the private key and can calculate the result in advance. If this result does not benefit him, he can choose not to reveal the private key and not to execute the second transaction.

Blockchain forking and CAP theorem


According to the CAP theorem, every distributed system can guarantee at most two properties of the three: consistency, availability and partition tolerance. As most systems are vulnerable to network partitions, the question is usually whether they prefer consistency over availability or vice versa. Public blockchains simply prefer availability: if there is a network partition or a disagreement in the network, the blockchain splits or forks into two different partitions having two different views of the world. Other systems, like Hashgraph, try to solve the forking problem with different mechanisms; however, there are probably no miracles here: if the blockchain cannot fork in case of a network separation, it will stop working. Simply put, such systems prefer consistency over availability.

Thursday, November 15, 2018

Solidity and Truffle Tips and Tricks - converting string to byte32


If you want to convert strings in Solidity to bytes32 and you get different kinds of error messages at explicit or implicit conversion, or when changing between memory and storage variables, you can use the following function:


function stringToBytes32(string memory source) 
                                        public pure returns (bytes32 result) {
    // interpret the string as a byte array to check for the empty case
    bytes memory tempEmptyStringTest = bytes(source);
    if (tempEmptyStringTest.length == 0) {
        return 0x0;
    }

    assembly {
        // load the first 32 bytes of the string's data area,
        // skipping the 32-byte length word at its start
        result := mload(add(source, 32))
    }
}
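
A minimal usage sketch, with a hypothetical contract just to show the function in context, could use the resulting bytes32 as a fixed-size mapping key:

pragma solidity ^0.4.25;

contract NameRegistry {

    mapping(bytes32 => address) public owners;

    // registers the caller under the fixed-size key derived from the name
    function register(string name) public {
        owners[stringToBytes32(name)] = msg.sender;
    }

    function stringToBytes32(string memory source)
                                        public pure returns (bytes32 result) {
        bytes memory tempEmptyStringTest = bytes(source);
        if (tempEmptyStringTest.length == 0) {
            return 0x0;
        }
        assembly {
            result := mload(add(source, 32))
        }
    }
}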

Monday, November 12, 2018

Notes on DevOps, agile development and maintenance cost


Surprisingly, techniques like DevOps and agile did not actually make the software industry easier or more user-friendly. Automated and regular software deliveries certainly made it possible to adapt the software more frequently to the user requirements; however, they made the maintenance and operation of the software more difficult. Simply put, running a software product that has daily deliveries is not easy, but the biggest problem is that most software components do not run individually, but together with dozens or hundreds of further software modules. If you consider that each of these can be released on a daily basis, and that documentation is usually the last priority of these systems, this surely results in an enormous maintenance cost, if maintenance is possible at all.

One solution might be the appearance of AI-based software configuration and maintenance systems. From a purely theoretical point of view, there is also the idea of making our software systems simpler, but to be realistic, that is not going to happen.

Fabric composer tips and tricks - Cyclic ACL Rule detected, rule condition is invoking the same rule


If one of your assets or participants is not updated when you use the Fabric Composer online playground, you may get the following error message in the JavaScript console: "Cyclic ACL Rule detected, rule condition is invoking the same rule". The problem can be that you did not use the await statement before an update call, so two updates might have run in parallel.