...by Daniel Szego

"On a long enough timeline we will all become Satoshi Nakamoto."
Daniel Szego

Wednesday, November 28, 2018

Configure Oraclize with an Ethereum consortium or private network


Supposing you want to use Oraclize with a consortium Ethereum blockchain network, in the simplest case with your Truffle development environment, you have to do the following steps:

1. Install Ethereum Bridge on one of the nodes of your Ethereum consortium network: https://github.com/oraclize/ethereum-bridge

git clone https://github.com/oraclize/ethereum-bridge.git
cd ethereum-bridge
npm install

2. Start Ethereum Bridge with the network URL and an account index, for example for the Truffle development network:

./ethereum-bridge -H localhost:9545 -a 0 --dev

3. When deploying with Truffle, copy OraclizeAPI_05.sol locally.

4. You can practically use the same code as on a live network; in the Ethereum Bridge window you can check the sent and received transactions as well.

A very simple implementation can be found under EthereumBridge in the following GitHub repo.



Notes on AI and system complexity




Our IT systems, including software and hardware components, are getting so complex that we need AI support for monitoring, maintenance, operation or even development.

However, that AI support will not make our systems simpler, and it does not help people understand those systems better either. So in the long run it will result in IT systems that cannot be operated by purely biological intelligence anymore, only with the help of AI, or purely by AI.

Architecting Blockchain platforms communicating with external data sources


Integrating an external data source with a blockchain solution is usually not an easy task. The major problem is that smart contract systems cannot directly call an external data source: if different peers saw different pieces of data when evaluating the external source, they could not come to a consensus. So external data integration certainly requires solving some technological challenges. However, even when considering the use-case and the general architecture, some questions can be raised:
1. Decentralization model of the blockchain: depending on the use-case, systems can be built on totally public as well as consortium blockchains.
2. Trust model of the oracle: in certain use-cases we might as well say that there is one trusted data source, a custodian oracle, that we trust. There might however be the case that we want to integrate data from multiple data sources in a way that no single data source is trusted. Such a system can be implemented with the help of a game-theoretical approach, usually a Schelling point, providing a fully decentralized oracle algorithm. Such systems are realized for example by prediction markets, like Augur or Gnosis.
3. Trust of the communication medium: the communication medium is usually the internet, which is pretty much untrusted, meaning that there is a need both for encrypting the data and for preventing tampering, for example with message authentication or authenticity proofs. There might however be the case that we trust the communication medium. As an example, if the oracle is an IoT source that is hosted by the same cloud provider as our consortium blockchain, we might as well trust the communication.


Double spending and replay attacks in different distributed ledger systems



Double spending, and the prevention of double spending and replay attacks, work differently in the different blockchain and distributed ledger systems, especially if we consider the ledger structure.

- Bitcoin: in Bitcoin, there are only unspent transaction outputs, which behave as coins. With proof of work, the winning miner defines an order of the transactions that are applied to the ledger, that is, to the unspent coins. The rule is that every unspent output can be spent only once: considering the transaction order of the winning miner, the first valid transaction spending an unspent transaction output really spends it, and any further such transaction is considered a double spend. Similarly, old transactions cannot be replayed, because the output is already spent.
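The spend-once rule above can be sketched in a few lines of JavaScript (an illustrative model, not Bitcoin's actual implementation; the transaction and output names are made up):

```javascript
// Apply transactions in the winning miner's order to a set of unspent outputs.
// The first valid spender consumes an output; any later spend of the same
// output is rejected as a double spend, and a replayed transaction fails too.
function applyTransactions(unspentOutputs, transactions) {
  const utxoSet = new Set(unspentOutputs);
  const accepted = [];
  const doubleSpends = [];
  for (const tx of transactions) {
    if (utxoSet.has(tx.input)) {
      utxoSet.delete(tx.input);              // the output can be spent only once
      if (tx.output) utxoSet.add(tx.output); // the spend creates a new unspent output
      accepted.push(tx.id);
    } else {
      doubleSpends.push(tx.id);              // input already spent or never existed
    }
  }
  return { accepted, doubleSpends };
}
```

Feeding it two transactions that spend the same output shows the first one winning and the second one rejected.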

- Corda: in some sense, Corda uses a similar UTXO (unspent transaction output) based system as Bitcoin. However, the unspent outputs are not "coins" but complex state information of a contract. Similarly to Bitcoin, one output can be spent only once, realizing an efficient way of avoiding both double spending and replay attacks. As opposed to Bitcoin, there is no proof of work or mining; instead a special dedicated node, called the notary service, is responsible for the ordering of the transactions.

- Ethereum: Ethereum is not a UTXO-based but an account-based system. Practically, every account has a value field with a nonce that is incremented at every transaction. So a transaction refers not simply to an account but to an account with a certain nonce. With proof of work or proof of stake, the winning miner or validator creates a block that contains an ordering of the transactions. If there are two transactions referring to the same account with the same nonce, then the first will be executed and the second one will be recognized as a double spend.
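The account-nonce rule can be sketched similarly (again an illustrative model with made-up names, not Ethereum's actual client code):

```javascript
// A transaction references (account, nonce); it executes only if the nonce
// matches the account's current nonce, which is then incremented, so the
// same (account, nonce) pair can never run twice.
function applyAccountTransactions(initialNonces, transactions) {
  const nonces = { ...initialNonces }; // account -> expected next nonce
  const executed = [];
  const rejected = [];
  for (const tx of transactions) {
    if (nonces[tx.from] === tx.nonce) {
      nonces[tx.from] += 1;
      executed.push(tx.id);
    } else {
      rejected.push(tx.id); // double spend or replay of an old transaction
    }
  }
  return { executed, rejected };
}
```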

- Hyperledger Fabric: Fabric has a somewhat similar mechanism to Ethereum. Each transaction is simulated by the smart contracts at the endorsement peers, and read-write sets are defined. A read operation refers not only to a variable but to a version of that variable. As a consequence, two transactions referring to the same input variable, where one reads and one writes that variable, can be executed only in one specific order. If in one round a transaction has already written into a variable, the version of that variable is increased, so in the same round the variable cannot be the read input of another transaction. In Fabric, there is no proof of work; a specific service, called the ordering service, is responsible for creating a valid order of the transactions.
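Fabric's read-set version check can be sketched as follows (an illustrative model of the validation phase, with made-up structures, not Fabric's actual code):

```javascript
// A transaction commits only if every key it read still has the version it
// saw at endorsement time; committed writes bump the key's version, so a
// second transaction in the same block reading the old version is invalidated.
function validateAndCommit(worldState, transactions) {
  const committed = [];
  const invalidated = [];
  for (const tx of transactions) {
    const conflict = tx.reads.some(r => {
      const current = worldState[r.key];
      return !current || current.version !== r.version;
    });
    if (conflict) {
      invalidated.push(tx.id); // an earlier tx in this block already wrote the key
    } else {
      for (const w of tx.writes) {
        const prev = worldState[w.key] ? worldState[w.key].version : 0;
        worldState[w.key] = { value: w.value, version: prev + 1 };
      }
      committed.push(tx.id);
    }
  }
  return { committed, invalidated };
}
```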


Sunday, November 25, 2018

How to trigger an Ethereum smart contract if a Solidity event occurs


Well, you cannot react to an event in a Solidity smart contract; there is no EVM instruction at all that checks the event log. Besides, there is no way of starting a smart contract automatically. However, you can use a semi-trusted setup with Oraclize where, similarly to my previous blog post, you can periodically check whether an event has occurred.

Firstly, you can use an Ethereum explorer API to get information about an event of a contract with the following HTTP GET request:

https://api-ropsten.etherscan.io/api?module=logs&action=getLogs
   &fromBlock=<from>
   &toBlock=latest
   &address=<contract address>
   &topic0=<event id>
   &apikey=<YourApiKeyToken>

So, what you have to do is to query this log periodically with Oraclize and check whether the necessary information is present.

 // scheduleUpdate schedules the next call
 function scheduleUpdate() payable {
    if (oraclize_getPrice("URL") > this.balance) {
        LogNewOraclizeQuery("Not enough funds");
    } else {
        // NEXT UPDATE IS SCHEDULED IN 60 SECONDS
        oraclize_query(60, "URL", " url explorer API for the event ");
    }
 }

 function __callback(bytes32 myid, string result) {
    // SECURITY CHECK: only Oraclize may call back
    if (msg.sender != oraclize_cbAddress()) revert();

    // PROCESS INFORMATION; information_not_present stands for the
    // contract-specific check on the parsed log data
    if (information_not_present) {
        // SCHEDULE NEXT UPDATE
        scheduleUpdate();
    }
 }

Certainly, this is not the cheapest and most efficient way of triggering a smart contract on an event, but it might work on a small scale. An experimental implementation can be found under EventTriggeredEthereumContract in the following GitHub repo.
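For off-chain testing, the explorer query described above can be assembled with a small helper (a sketch; the Ropsten host is the one from the example, the address, topic and API key values are placeholders):

```javascript
// Build the Etherscan getLogs URL for a contract event, with toBlock pinned
// to "latest" as in the query above.
function buildLogQuery({ fromBlock, address, topic0, apiKey }) {
  const params = new URLSearchParams({
    module: 'logs',
    action: 'getLogs',
    fromBlock: String(fromBlock),
    toBlock: 'latest',
    address,
    topic0,
    apikey: apiKey,
  });
  return 'https://api-ropsten.etherscan.io/api?' + params.toString();
}
```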

Update an Ethereum smart contract periodically with Oraclize



Ethereum smart contracts are normally not able to start automatically on a timely basis; there has to be an external service that calls the contract periodically. You can however use Oraclize as an external service for such a use-case. Although Oraclize is an external service that one has to trust, it works in a relatively secure way, providing different kinds of security guarantees for the execution. Example code can be realized by the following two functions in a smart contract that inherits from a version of the OraclizeAPI, like from usingOraclize:

    function __callback(bytes32 myid, string result) {
        // SECURITY CHECK: only Oraclize may call back
        if (msg.sender != oraclize_cbAddress()) revert();

        // SCHEDULE NEXT UPDATE
        scheduleUpdate();
    }

    function scheduleUpdate() payable {
        if (oraclize_getPrice("URL") > this.balance) {
            LogNewOraclizeQuery("Not enough funds");
        } else {
            // NEXT UPDATE IS SCHEDULED IN 60 SECONDS
            oraclize_query(60, "URL", " .. test url ..");
        }
    }

An experimental implementation can be found under my GitHub repo. Certainly, this is not necessarily the cheapest way of operation; depending on the exact business logic, the operational cost might be extremely high. As a simple example, at the time of writing, one simple transaction incrementing a state variable, together with the cost of the service, is around 0.01 ether.

Saturday, November 24, 2018

Truffle and solidity tips and tricks - test events in unit tests


If you use Truffle and want to test Solidity with JavaScript unit tests, you might want to test the raised events as well. As the result gives back the whole transaction including the event log, the easiest way to get the events is to read them directly from the log:

   return ContractInstance.functionCall({from: accounts[0]});
   }).then(function(result) {

- result contains the whole transaction
- result.logs contains the whole log
- result.logs[0].event contains the name of the first event

So, an assert that checks if an event was raised might simply look like:

assert.equal(result.logs[0].event, "EventName", "Event raised");
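The log inspection above can be wrapped into a tiny helper so tests do not depend on event ordering (a sketch; the helper name is made up, `result.logs` is the structure Truffle returns):

```javascript
// Collect the names of all events raised in a Truffle transaction result.
function eventNames(result) {
  return result.logs.map(log => log.event);
}

// In a test you could then assert:
// assert.ok(eventNames(result).includes('EventName'), 'Event raised');
```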



Notes on Bitcoin Script and data flow programming

It is usually not well known, but Bitcoin has a built-in language, called Bitcoin Script, which is a non-Turing-complete language to implement custom logic and payment control flow. As examples, locking and unlocking scripts are implemented in the Bitcoin Script language. The language itself is executed sequentially with the help of a stack, simply combining the unlocking script, which is the input, with the locking script, which can be regarded as the business logic behind the computation.
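The sequential stack execution described above can be sketched with a toy interpreter (a tiny illustrative subset, not the real opcode set; the script succeeds if the top of the stack is truthy at the end):

```javascript
// Run the unlocking script first, then the locking script, over one stack.
function runScript(unlockingScript, lockingScript) {
  const stack = [];
  for (const op of [...unlockingScript, ...lockingScript]) {
    if (op === 'OP_ADD') {
      stack.push(stack.pop() + stack.pop());      // arithmetic on the stack
    } else if (op === 'OP_EQUAL') {
      stack.push(stack.pop() === stack.pop() ? 1 : 0); // comparison
    } else {
      stack.push(op);                              // push data
    }
  }
  return stack.pop() === 1;
}
```

For example, the unlocking script `[2, 3]` satisfies the locking script `['OP_ADD', 5, 'OP_EQUAL']`, a puzzle requiring two numbers that sum to 5.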

Ethereum extended the basic idea of Bitcoin to a general programmable framework, extending the internal script language to a Turing-complete virtual machine. However, we can imagine the opposite extension possibility as well. Bitcoin Script can be imagined as a set of if-then statements, or a set of rules, where the execution should not necessarily be defined in a sequential way. The working mechanism of Script could be defined in parallel as well, opening the possibility for several parallel, but not necessarily Turing-complete, computational paradigms. Such paradigms might be for example:
- any data-flow programming language or environment
- a simple knowledge base of if-then rules
- and-or networks of direct electrical or digital circuits
- computation models of quantum computing, like quantum circuits
- artificial neural networks or similar learning structures
- biologically inspired computation models, like low entropy model specification or NeuroML

Friday, November 23, 2018

Truffle and solidity tips and tricks - error at contract deployment gas amount


If you deploy a contract with the help of Truffle, possibly on a development environment, and you get the following error message:

"Error encountered, bailing. Network state unknown. Review successful transactions manually 
Error: The contract code couldn't be stored, please check your gas amount."

Well, there might be a problem with the actual gas amount at deployment, but most typically you are trying to deploy a contract that has an abstract method, or coincidentally inherits an abstract method that is not implemented.

The rise of non-custodial payment networks


As the world speaks about blockchain and distributed ledger technologies, a brand new field is appearing that solves value transfer with a P2P network, but either totally without a global ledger or involving a global ledger only as part of the algorithm. Some of these technologies are called Layer 2 solutions, but it is probably better to name them non-custodial payment and state networks; they include the following platforms:
- Lightning network
- Raiden
- microRaiden
- Liquidity network 
- Interledger Protocol
- Hyperledger Quilt
- Actually the R3 Corda platform has some similar possibilities as well


Wednesday, November 21, 2018

Notes on Enterprise consortium blockchain strategy

If you want to create your consortium blockchain technology and platform focusing on the enterprise segment, you should do the following:
- create an easy to use infrastructure template in AWS
- create templates in Microsoft Azure
- create templates in the IBM cloud if possible
- create integration technologies with SAP
- create plugins for the Microsoft products
- create connectors for every possible ERP product
- get your project into the Hyperledger - Linux Foundation incubators
- get your product listed at the Enterprise Ethereum Alliance

And the reason for that is simple: enterprise procurement will not really change in the short run. Products from big enterprise IT vendors, or connections with these companies, will always be preferred.



Tuesday, November 20, 2018

Truffle and solidity tips and tricks - nonce error with Metamask


If you use the Truffle development environment together with Metamask, you can often get the following error message: "the tx doesn't have the correct nonce". The problem is that you probably use the same accounts both from the Truffle development console and from the Metamask UI. Unfortunately, Metamask does not automatically update/refresh the nonce if the transaction was executed by the Truffle development console. So what you have to do is reset the Metamask account: Settings - Reset Account.

Monday, November 19, 2018

Architecting for Byzantine fault tolerance

Designing the computer architectures of the future will surely be extended by some new aspects, namely Byzantine fault tolerance and the trust model. As fault tolerance is usually an aspect to investigate, future systems can be designed for Byzantine fault tolerance, meaning that even if parts of the system are hacked, the system delivers correct results. One aspect that needs to be taken into account is the CAP theorem, which implies that in case of a network partition the system has to choose between availability and consistency. Another important design choice is the trust model. When analyzing the trust model, each component of the system has to be investigated in terms of whether a service of the system works only if we trust the given component. In this sense we can distinguish between trusted, trustless and semi-trusted services or components.

Sunday, November 18, 2018

Notes on multi block algorithms and protocols


Research on decentralisation is at the moment focusing pretty much on the scalability of different protocols and platforms. Based on the current research directions, there may well be efficient blockchain protocols in a couple of years. So, we might as well investigate the possibilities of creating algorithms and protocols that cannot be executed in one block or one transaction, but can only be realized by several actions, crossing several blocks. Actually, Layer 2 protocols, like Lightning Network or Raiden, are going a little bit in this direction. Multi-block protocols can provide many services that are not imaginable with current one-block architectures.

How to create a native external oracle with Nakamoto consensus





Similarly to the previous blog post, designing a decentralized native external oracle can be realized in the same way as a native random oracle. The principle is basically the same: on the one hand, the miners or validators measure the external data source and put the values into blocks, or at least temporal blocks. On the other hand, the imported data should be kept secret, because otherwise new miners could influence or even hack the algorithm itself.

The full multi-block native external oracle algorithm can be described as follows:

1. An initiator having a {Priv, Pub} private and public key pair creates an external oracle request that includes the public key as well:

request_extern (Pub, N, ext)

, where Pub is the public key of the requestor, ext is the external data source and N is the number of rounds during which the external data source has to be measured.

2. At mining or validation, a miner or validator measures the value of the external data source, encrypts it with the public key of the requestor and puts it into the request itself. So after the first validation, the request will look like:

request_extern (Val1)(Pub, N-1, ext) 

where Val1 is the measured external data encrypted with the public key. To ensure security, the Val1 value should be put into the blockchain as well, not necessarily forever, but at least during the N rounds of execution of the native external oracle. So, in other words, the request is split into two parts:

request_extern (Val1) 

will be written into the blockchain

request_extern (Pub, N-1,ext)

can be available as a new transaction request that is propagated throughout the network in transaction pools, waiting to be mined or validated. Similarly, after k<N rounds, there will be k encrypted external values in the blockchain:

request_extern (Val1)
request_extern (Val2)
...
request_extern (Valk) 

and a new request as a transaction which is 

request_extern (Pub, N-k, ext)

3. After N rounds, the requestor can aggregate the external values and decrypt them with the Priv private key. 

Ext1 = Decrypt_Priv(Val1)
Ext2 = Decrypt_Priv(Val2)
...
ExtN = Decrypt_Priv(ValN)

The individual external values should be aggregated in a way that they provide a correct average value even if some of the nodes try to hack or game the system. The algorithm should contain an incentive mechanism as well, giving a reward to the nodes that gave correct values, in this way motivating nodes to produce correct data and providing a Schelling point as the decision making algorithm. Supposing that around one third of the nodes and measurements can be faulty, we can have the following algorithm:

a. filter out the 33% most extreme values
b. take the average of the remaining values, providing the real external value
c. reward the nodes whose values were not filtered out, based on their distance from the average.
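Steps a-c can be sketched in JavaScript (assuming plain numeric measurements; the filtering here drops the third of the values farthest from the median, one reasonable reading of "most extreme"):

```javascript
// Schelling-point style aggregation: drop the most extreme third of the
// values, average the rest, and mark the surviving values for reward.
function aggregate(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const median = sorted[Math.floor(sorted.length / 2)];
  // keep the two thirds of values closest to the median
  const keep = Math.ceil(values.length * 2 / 3);
  const kept = [...values]
    .sort((a, b) => Math.abs(a - median) - Math.abs(b - median))
    .slice(0, keep);
  const average = kept.reduce((sum, v) => sum + v, 0) / kept.length;
  return { average, kept }; // nodes whose value is in `kept` get the reward
}
```

With measurements like [10, 11, 9, 10, 100, 0], the outliers 100 and 0 are filtered out and the average of the remaining values is returned.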

The algorithm unfortunately has some performance issues: it takes N rounds to find out a real decentralized external value. This can be pretty long and pretty costly depending on the given blockchain platform. Considering a Nakamoto consensus, the algorithm is pretty difficult to speed up, as the basic idea is that the external values are coming from N individual sources, which at a Nakamoto consensus actually means N different blocks. This also implies that the data source should keep its value for a long enough time, like preserving the same value for hours. The algorithm cannot really be used with fast-changing data sources.

A further question that can arise is how exactly the data is used, for example within a smart contract. As private keys should not be stored in the blockchain, there would have to be an additional round with the wallet software to decrypt and aggregate the information, which might introduce elements of unnecessary centralization and is difficult to build directly into a decentralized smart contract. It is important to note, however, that this private key is not necessarily the same as the private key of the account, so it can actually be revealed as soon as all the N nodes have measured a value. As a consequence, a second transaction can be used to calculate the real external value, like:

evaluate_extern (Priv)

At this moment the private key can be published to the blockchain and the evaluation algorithm can be implemented with a smart-contract in a decentralized way.

From a practical point of view, a smart contract with a built-in native external oracle would look like:

function with_ext (input params) returns (output params) {
   ...
   variable = External(<external_datasource>);
   ...
}

For evaluating such a smart contract, two transactions should be used:

- the first one would call the function with the public key of the external oracle, initializing the external data measurement:

transaction init with_ext pub_key_ext

- the second transaction would publish the private key and perform the whole evaluation and the execution of the rest of the business logic, like:

transaction exec with_ext priv_key_ext

The proposed system unfortunately has one denial of service attack possibility. The one who initiated the external oracle has the private key and can calculate the result in advance. If this result does not benefit him, he can choose not to reveal the private key, or not to execute the second transaction.

Saturday, November 17, 2018

How to create a native random oracle with Nakamoto consensus


Creating a real native random oracle is one of the holy grails of the blockchain industry. While the problem is not so difficult if the consensus mechanism is a quorum, as the nodes participating in the consensus make decisions independently from each other, it is more difficult with a Nakamoto consensus. The problem with the Nakamoto consensus is that the temporal leader creating the next block is practically a "dictator" of the next block and can influence, for example, the random number. The algorithm can however be improved here as well, with two ideas:
- creating a real random number takes several rounds, where several nodes guess a random number, and the guesses are then aggregated at the end. Certainly, this is not necessarily a real solution in itself, as later leaders might see the previous random values and might influence the next value in a way that is profitable to them. To avoid such situations we can use the following idea:
- the random numbers are encrypted with a public key of the requestor. As a consequence, the next node does not really see the values of the previous blocks, so it cannot influence the final result.

The full multi-block native random oracle algorithm can be described as follows:

1. An initiator having a {Priv, Pub} private and public key pair creates a random oracle request that includes the public key as well:

request_random (Pub, N)

, where Pub is the public key of the requestor and N is the number of rounds during which the random number has to be generated.

2. At mining or validation, a miner or validator creates a native random number, encrypts it with the public key of the requestor and puts it into the request itself. So after the first validation, the request will look like:

request_random (Val1)(Pub, N-1) 

where Val1 is the generated random number encrypted with the public key. To ensure security, the Val1 value should be put into the blockchain as well, not necessarily forever, but at least during the N rounds of execution of the random oracle. So, in other words, the request is split into two parts:

request_random (Val1) 

will be written into the blockchain

request_random (Pub, N-1)

can be available as a new transaction request that is propagated throughout the network in transaction pools, waiting to be mined or validated. Similarly, after k<N rounds, there will be k encrypted random values in the blockchain:

request_random (Val1)
request_random (Val2)
...
request_random (Valk) 

and a new request as a transaction which is 

request_random (Pub, N-k)

3. After N rounds, the requestor can aggregate the random values and decrypt them with the Priv private key. 

Rand1 = Decrypt_Priv(Val1)
Rand2 = Decrypt_Priv(Val2)
...
RandN = Decrypt_Priv(ValN)

The individually generated random numbers should be aggregated in a way that the randomness is preserved even if some of the values were not really randomly generated. The exact algorithm here is an open question, but it is important that the original entropy of the requested random number is maintained even if some of the nodes are cheating. Ideas might be:

sha256(Rand1, Rand2, .... RandN)
sha256(Rand1 xor Rand2 xor ... RandN)
sha256( ... sha256(sha256(Rand1))... )
  
The algorithm has two drawbacks that can be fine-tuned in the long run:
- The algorithm takes N rounds to find out a real decentralized random number. This can be pretty long and pretty costly depending on the given blockchain platform. Considering a Nakamoto consensus, the algorithm is pretty difficult to speed up, as the basic idea is that the random numbers are coming from N individual sources, which at a Nakamoto consensus actually means N different blocks.
- Based on the evaluation algorithm, we should assume that some of the miners or validators created a good random number. Considering a good Byzantine evaluation function at the end, even if some of the nodes cheat, the resulting random number can be a good one, like cryptographically secure. The problem however is that even the honest nodes are not really incentivized to create good random numbers, since we cannot really measure, or punish the lack of, good random numbers. It can certainly be an assumption that a certain number of the nodes are honest, but it would actually be much better to measure and validate this fact.

A further question that can arise is how exactly the data is used, for example within a smart contract. As private keys should not be stored in the blockchain, there would have to be an additional round with the wallet software to decrypt and aggregate the information, which might introduce elements of unnecessary centralization and is difficult to build directly into a decentralized smart contract. It is important to note, however, that this private key is not necessarily the same as the private key of the account, so it can actually be revealed as soon as all the N nodes have created a guess for the random number. As a consequence, a second transaction can be used to calculate the real random number, like:

evaluate_random (Priv)

At this moment the private key can be published to the blockchain and the evaluation algorithm can be implemented with a smart-contract in a decentralized way.

From a practical point of view, a smart contract with a built-in native random oracle would look like:

function with_rand (input params) returns (output params) {
   ...
   variable = Rand();
   ...
}

For evaluating such a smart contract, two transactions should be used:

- the first one would call the function with the public key of the random oracle, initializing the secret random number generation:

transaction init with_rand pub_key_rand

- the second transaction would publish the private key and perform the whole evaluation and the execution of the rest of the business logic, like:

transaction exec with_rand priv_key_rand

The proposed system unfortunately has one denial of service attack possibility. The one who initiated the random oracle has the private key and can calculate the result in advance. If this result does not benefit him, he can choose not to reveal the private key, or not to execute the second transaction.

Blockchain forking and CAP theorem


According to the CAP theorem, every distributed system can provide at most two of three properties: consistency, availability and partition tolerance. As most systems are vulnerable to network partitions, the question is usually whether they prefer consistency over availability or vice versa. Public blockchains simply prefer availability: if there is a network partition or a disagreement in the network, the blockchain splits, or forks, into two different partitions having two different views of the world. Other systems, like Hashgraph, try to solve the forking problem with other mechanisms; however, there are probably no miracles: if the blockchain cannot fork in case of a network separation, it will stop working. Simply put, such systems prefer consistency over availability.

Thursday, November 15, 2018

Solidity and Truffle Tips and Tricks - converting string to byte32


If you want to convert strings to bytes32 in Solidity and you get different kinds of error messages at explicit or implicit conversion, or at changing between memory and storage variables, you can use the following function:


function stringToBytes32(string memory source)
                                        returns (bytes32 result) {
    bytes memory tempEmptyStringTest = bytes(source);
    if (tempEmptyStringTest.length == 0) {
        return 0x0;
    }

    // load the first 32 bytes of the string's data area
    assembly {
        result := mload(add(source, 32))
    }
}
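The same conversion is sometimes needed on the client side, for example when preparing arguments in a Truffle test. A sketch of the equivalent in Node.js (assuming the string fits into 32 bytes, mirroring how the Solidity function above left-aligns short strings):

```javascript
// Left-align a UTF-8 string in a zero-padded 32-byte hex value.
function stringToBytes32(source) {
  const buf = Buffer.alloc(32);             // zero-filled 32 bytes
  Buffer.from(source, 'utf8').copy(buf, 0); // assumes source fits in 32 bytes
  return '0x' + buf.toString('hex');
}
```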

Monday, November 12, 2018

Notes on DevOps, agile development and maintenance cost


Surprisingly, techniques like DevOps and Agile did not actually make the software industry easier or more user-friendly. Automated and regular software deliveries certainly made it possible to adapt the software more frequently to the user requirements; however, they made the maintenance and operation of the software more difficult. Simply put, running software that has daily deliveries is not easy, but the biggest problem is that most software components do not run individually, but together with dozens or hundreds of further software modules. If you consider that each of these can be released on a daily basis, and that documentation is usually the last priority of these systems, it surely results in an enormous maintenance cost, if maintenance is possible at all.

One solution might be the appearance of AI-based software configuration and maintenance systems. From a purely theoretical point of view, there might be the idea of making our software systems simpler, but to be realistic, that is not going to happen.

Fabric composer tips and tricks - Cyclic ACL Rule detected, rule condition is invoking the same rule


If one of your assets or participants is not updated, and you get an error message like the following in the JavaScript console when using the Fabric Composer online playground: "Cyclic ACL Rule detected, rule condition is invoking the same rule". The problem can be that you do not use the await statement before an update, so two updates might happen in parallel.

Fabric composer tips and tricks - deleting asset from array


If you want to delete an element from an asset or participant that has a reference array to another asset or participant, the process is pretty similar to deleting an element from a JavaScript array:

1. Getting a registry for the asset or participant
const assetReg = await getAssetRegistry(namespace + '.Asset'); 

2. Getting an index of the asset or participant to be deleted
var index = asset.arrayOfReferences.indexOf(assetToBeDeleted);

3. Delete the index in a javascript style
  if (index > -1) {
     asset.arrayOfReferences.splice(index, 1);
  }

4. Update the asset
await assetReg.update(asset);
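The array-manipulation part of the steps above can be pulled out into a plain helper (a sketch with illustrative names; the registry calls themselves need the Composer runtime):

```javascript
// Remove one item from a reference array in place; returns whether
// anything was removed, so the caller knows if an update is needed.
function removeReference(references, item) {
  const index = references.indexOf(item);
  if (index > -1) {
    references.splice(index, 1);
    return true;
  }
  return false;
}
```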

Fabric composer tips and tricks - not updating without error message



Working with Fabric Composer, you can sometimes encounter the phenomenon that some item is not updated; there is however no error message or anything, the transaction is executed without any problems, only something is not updated. This can be caused by exchanging the getAssetRegistry and getParticipantRegistry statements. If you experience such a phenomenon, check that you update assets with getAssetRegistry and participants with getParticipantRegistry.

Tuesday, November 6, 2018

Solidity as a consortium blockchain programming language



There are many initiatives for using Solidity as a blockchain programming language, not just in Ethereum but in many other, mostly consortium, blockchain solutions. On the one hand, this is a logical direction, as most of the developers who have blockchain programming experience have it with Solidity on Ethereum. On the other hand, most of the existing blockchain applications are realized with the help of Solidity, so they might be migrated this way to a consortium platform without any modification.

Despite the direction is perhaps not absolutely optimal. On the one hand, Solidity was one of the pure Blockchain oriented programming language and certainly it is good for a first initiative but it is perhaps not so optimal on a long run. It has several "child illnesses" that should be repaired on a long run, like the chaotic type system or payment functions. On the other hand, programming on Ethereum with Solidity supposes indirectly very strong constraints in terms of computation and gas. If the same use case is put on the top of a consortium blockchain, where gas is practically free of charge, the application should be implemented probably totally differently, even if the programming language is the same.