...by Daniel Szego
"Simplicity is the ultimate sophistication."
Leonardo da Vinci

Tuesday, July 31, 2018

Solidity Tips and Tricks - running optimization


Let us imagine a situation where we have to implement a smart contract that needs some kind of external optimization after every call of the business logic, like optimizing an IOU graph. What we can do is keep a bool variable and two modifiers indicating whether our data structure is optimized or not. If the data structure is optimized, we can call any business logic function. If it is not optimized, we should be able to call only the OptimizationLogic function. Certainly, the access control rights and roles of such a system are still questionable and need some more fine-tuning.

pragma solidity ^0.4.24;

contract Optimization {
    bool optimized;
    
    constructor() public {
        optimized = true;
    }
    
    modifier Optimized {
        require(optimized);
        _;
    }

    modifier NotOptimized {
        require(!optimized);
        _;
    }
    
    function BusinessLogic() public Optimized {
        // do the business logic
        optimized = false;
    }

    function OptimizationLogic() public NotOptimized {
        // do the optimization logic
        optimized = true;
    }
}

Simple IOU contract with optimization possibility on Ethereum

Simple systems administrating IOU contracts can easily be realized on top of Ethereum. Let us say that accounts can issue new IOUs that are administrated as credit and debit balances. Anyone can issue an IOU to a particular account, but nobody can issue an IOU in the name of somebody else. There is a further external role for optimizing the IOU graph; in this first version, optimizing only means reducing the difference between the credit and debit balance of a certain node. The role can for the moment be filled by anybody, and the activity is rewarded with an extra reward token as an incentive mechanism. A simple implementation is shown in the following example.

pragma solidity ^0.4.24;

contract SimpleIOUWithOptimization {
    
    mapping(address => uint) creditBalances;
    mapping(address => uint) debitBalances;
    mapping(address => uint) rewardBalances;
    
    function issueCredit(address _to, uint _value) public {
        // the issuer owes the value, the receiver is owed the value
        debitBalances[msg.sender] += _value;
        creditBalances[_to] += _value;
    }
    
    function optimizeAccount(address _account) public {
        uint diff;
        if (debitBalances[_account] >= creditBalances[_account]) {
            diff = debitBalances[_account] - creditBalances[_account];
            rewardBalances[msg.sender] += diff;
            // net out the credit side, leave the remaining difference as debit
            creditBalances[_account] = 0;
            debitBalances[_account] = diff;
        } else {
            diff = creditBalances[_account] - debitBalances[_account];
            rewardBalances[msg.sender] += diff;
            // net out the debit side, leave the remaining difference as credit
            debitBalances[_account] = 0;
            creditBalances[_account] = diff;
        }
    }
}

Certainly, in this use case the optimization could be executed right at the credit issuance; however, in more complicated scenarios this might not be an option.
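The netting step can also be sketched off-chain. The following Python model (the function and variable names are my own, illustrative choices) mirrors the logic of the contract above: net out the smaller of the two balances and reward the caller with the remaining difference.

```python
def optimize_account(credit, debit, rewards, account, caller):
    """Net an account's credit and debit balances against each other.

    credit, debit, rewards are dicts mapping address -> amount, mirroring
    the contract's mappings; the reward equals the remaining difference.
    """
    c, d = credit.get(account, 0), debit.get(account, 0)
    diff = abs(d - c)
    if d >= c:
        # net out the credit side, leave the difference as debit
        credit[account], debit[account] = 0, diff
    else:
        # net out the debit side, leave the difference as credit
        debit[account], credit[account] = 0, diff
    # the caller is rewarded for performing the optimization
    rewards[caller] = rewards.get(caller, 0) + diff
    return diff
```

Running this against a toy account with credit 30 and debit 50 leaves a debit of 20 and zero credit, without changing the account's net balance.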

Hash matrix - a blockchain algorithm with multiple retention policies

Hunting for the holy grail of a blockchain solution with multiple retention policies, we might as well consider an extended version of the multi-hash blockchain structure. There are different transactions or pieces of data with different retention policies, hashed together with a double hashing structure in which one of the hashes is regularly reset, practically realizing a forgetting functionality on the chain. On top of that, data from a longer retention period can be hashed together with data from a shorter retention period, realizing a kind of hash matrix, as can be seen in the picture below:

The structure is pretty straightforward at the first stage, as the immutable structure has only one hash chain; however, the exact implementation becomes questionable at later stages. The problem is that in the long retention policy structure there are actually two independent hash chains, which of course contain the same set of transactions and data but differ in their hash values. It is a little bit questionable which of these chains should be hashed together with the next one. Some of the possibilities might be:
- Both hash chains of the long retention policy chain are hashed together with the short retention policy one. This structure, however, might cause an exponential explosion of the different hash values, as in the case of the short retention policy we might have to manage 4 chains. There might be some simplification if the reset times of the different policies divide each other and are well scheduled.
- Only the data or transactions of the previous retention policy chain are copied into the next phase, without the information of the previous hash values. This, however, can decrease the reliability of the whole structure.
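A minimal Python sketch can illustrate the basic chaining direction: an immutable chain feeds its head into a short retention chain, while the short chain regularly resets and forgets its history. The class, function names and the hashing scheme are purely illustrative assumptions, not part of any concrete protocol.

```python
import hashlib

def h(*parts):
    # illustrative helper: concatenate the parts and hash with SHA-256
    m = hashlib.sha256()
    for p in parts:
        m.update(str(p).encode())
    return m.hexdigest()

class RetentionChain:
    """One hash chain with a reset period; a reset 'forgets' history
    by restarting the running hash from a fixed seed value."""
    def __init__(self, period, seed="0"):
        self.period, self.seed = period, seed
        self.head, self.height = seed, 0

    def append(self, data, linked_head=None):
        # data from a *longer* retention chain may be hashed in via linked_head;
        # the reverse direction is never allowed
        self.head = h(self.head, data, linked_head or "")
        self.height += 1
        if self.period and self.height % self.period == 0:
            self.head = self.seed   # reset: older blocks can be dropped

long_chain = RetentionChain(period=0)    # never resets (immutable)
short_chain = RetentionChain(period=4)   # forgets every 4 blocks
for block in range(8):
    long_chain.append(block)
    short_chain.append(block, linked_head=long_chain.head)
```

After eight blocks the short chain has just been reset back to its seed, while the immutable chain still carries a hash covering the full history.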

The different role of digital signatures in UTXO and account based systems


Although the role of cryptography and digital signatures is similar in all blockchain systems, they behave a little bit differently depending on whether we speak about a UTXO or an account based system. In a UTXO based system, keys practically represent the unspent outputs: an output is associated with a public key and can be spent by signing with the corresponding private key. In an account based system, each account has an address which is practically a public key. The private key in this scenario simply means that someone has the right to do something with the account, like sending money from it, or initiating another transaction such as calling a function of a smart contract from that account. Certainly, account based systems are much more vulnerable to replay attacks, in which someone copies a correctly signed transaction from the net and tries to broadcast it into the network again. For this reason, account based systems are usually extended with an incremental nonce as the transaction number.

Account based systems might also provide the possibility to extend the existing cryptographic scheme with other roles. Let us imagine that we use multi-signature addresses, or that an account is associated with several different public keys, and signing with a certain private key means something semantically different from signing with another key. For example, one private key is needed to spend money from the account, but another one to initiate a non-monetary transaction.

Monday, July 30, 2018

On the privacy of consortium Ethereum networks


There are usually two kinds of consortium networks: those that were designed as permissioned networks, and public networks, like Ethereum, that are configured in a consortium scenario. Public networks, however, have pseudonymous visibility by default, meaning that all of the transactions are visible on the network even though the real identities behind the keys are not known. This architectural feature does not change even if you run your network in a consortium scenario: your blockchain and transactions will be as visible as your network is. In other words, if you manage to run everything on a sealed network, for example with VPNs and firewalls, that means increased privacy. If you want to achieve more privacy, you have to consider some kind of encryption on the network, for example with zk-SNARKs.

Cryptoeconomical attacks on Blockchain applications


It is actually a tricky thing to identify the attack surface of a blockchain based system. The major problem is that such systems are not purely software architectures, but rather complex systems containing cryptography and software architecture components as well as elements based on economics. As a consequence, "hacking" or "gaming" such a system is usually not purely a simple software engineering task. There can be the following types of attacks:
- Classical attacks: trying to break the cryptography, or exploiting an implementation vulnerability.
- Monetary attacks: these exploit the fact that a token or several tokens are actively traded on a couple of exchanges. As an example, a pump-and-dump scheme, or perhaps even shorting a token or cryptocurrency, can be regarded as such an attack. Sometimes such an attack is not purely monetary, but is combined, for example, with a negative social media campaign.
- Certainly, there might be hybrid attacks as well, which try to exploit implementation errors combined with economic "gaming". For such categories a new field of cybersecurity should probably be defined.

Multi-hash blockchains and peer gossip protocol

In the Bitcoin network the peer synchronization protocol works as follows: first the peers identify which of them has the highest block on the blockchain by exchanging their best block height with getblocks(). After that, the peer with the highest block number can send the inventory, meaning the hashes of all the blocks that it has. With the given inventory, the peer can start to download the blocks one by one:

Supposing we have a multi-hash blockchain, there are several ways of synchronizing the blocks. In all cases, however, the "inv" call has to transfer not only the block headers but also the hash policy. If it is a simple multi-hash blockchain containing only one pair of hashes, then based on the inventory and the retention policies, the node can synchronize from the last but one hash pointer reset.

If it is a hybrid blockchain containing both immutable hashes and hashes with limited retention policies, then the situation is a little bit more complicated. From a practical point of view there are multiple parallel coexisting blockchains, where the hash headers of the longer retention policy variables can be hashed into the shorter ones, but not in the reverse direction. At synchronization, a block has to be requested either by the shortest retention time block header, or by all possible block headers. Similarly, when transferring the information, attention should be paid that expired block information is not transferred.
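The "synchronize from the last but one reset" rule can be sketched in a few lines of Python. The function name, the inventory representation, and the exact cut-off rule are illustrative assumptions, not part of the Bitcoin protocol:

```python
def blocks_to_sync(inventory, current_height, reset_period):
    """Pick which advertised blocks a fresh peer should download.

    inventory is a list of block heights the remote peer advertises.
    With a limited retention policy the peer only needs blocks since the
    last-but-one hash pointer reset; everything older is expired.
    """
    last_reset = (current_height // reset_period) * reset_period
    cutoff = max(0, last_reset - reset_period)   # last-but-one reset
    return [height for height in inventory if height >= cutoff]
```

For example, with 20 advertised blocks and a reset every 5 blocks, only the blocks from height 10 upwards would be downloaded.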

Saturday, July 28, 2018

State storage in Multi-hash blockchains

Multi-hash blockchains can be a great way of storing reduced information on the chain and providing lower security blockchain solutions. There might be a catch, however: the state is not necessarily stored 100 percent at each block; there might be other strategies as well. As an example, there might be an option to store only the last couple of states, because based on the transactions and the initial states the newer states can be recalculated or reinitialized. Another option might be to store the state redundantly between blocks, especially the parts that did not change, like in the following picture:

In any case, if the hash pointer is reinitialized, it indirectly means that the old blocks and state information can be deleted without affecting the consistency of the chain. As a consequence, there should not remain any dependency on the old blocks or the old chain. This implies that in such a situation the state tree should be built up again without external dependencies. As the state is validated by the old state and the transactions, the exact storage of the state information should be irrelevant to the consensus mechanism; it just requires more space at hash pointer resets.

Replay attack on the Blockchain


A replay attack can be interpreted in two ways on the blockchain:

1. If there is a hard fork on the blockchain and the system is split into two concurrent platforms, there is a possibility to copy a signed transaction from one chain and put it on the other chain. Usually at a hard fork there is a mechanism that explicitly prevents a replay attack, like a modified transaction semantics, or even just one bit on the forked blockchain, so that signed transactions of the old chain are not valid on the forked one.

2. Even without forking there is the possibility to copy an old transaction and try to replay it on the blockchain. This is not very effective in a UTXO system, because the system will know that the old transaction output has already been spent. However, if it is an account/balance based system, further algorithms must be used. One way to avoid a replay attack in an account/balance based system is to implement a counter for each account that has to be increased with each new transaction. Another way can be to create a nonce for each transaction randomly and automatically; the system has to ensure that the same nonce cannot be applied twice. There might be mixed solutions as well, where quasi-random nonces are used in an incremental fashion, like:

nonce_next = hash(nonce_prev)

In a multi-hash blockchain system we most likely have account/balance based systems, implying that we have to use one of the nonce or counter based solutions. That means that the state of the blockchain is actually not just the set of all balances, but the tuples of balance and nonce:

state_i = <balance_i, nonce_i_j >
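The hash-chained nonce scheme above can be sketched directly in Python. The state layout and function names are my own, illustrative choices:

```python
import hashlib

def next_nonce(prev_nonce: bytes) -> bytes:
    # nonce_next = hash(nonce_prev), as in the scheme above
    return hashlib.sha256(prev_nonce).digest()

def accept_transaction(state, sender, tx_nonce):
    """state maps sender -> (balance, expected_nonce); a transaction is
    valid only if it carries the expected nonce, which rules out replays."""
    balance, expected = state[sender]
    if tx_nonce != expected:
        return False          # replayed or out-of-order transaction
    # advance the nonce chain so the same signed transaction cannot be reused
    state[sender] = (balance, next_nonce(expected))
    return True
```

Broadcasting the same signed transaction twice fails the second time, because the expected nonce has already advanced along the hash chain.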


Blockchain and the storage cost structure


Blockchain is a pretty costly storage mechanism. The reason is that everything that is somehow a state variable is stored in the blockchain practically forever, including all of its previous versions. For this reason systems like Ethereum explicitly discourage storing too much information on the blockchain, for example through the gas cost of storage variables (initializing a new storage variable costs 20,000 gas, while modifying a state variable costs 5,000 gas, so these are practically among the most expensive operations).

The model could, however, be fine-tuned, as not every variable necessarily needs to be stored in the blockchain forever with every modification. There might be situations where we can say that a variable can have lower security: if we have consistent values stored over the last couple of hundred blocks, that might offer enough security for a given use-case. Actually, if we have consistent values for a couple of hundred blocks, they are not so easy to hack or fork. Considering a long range or forking attack with proof of work, it is almost as impossible to start a forking attack from a state 100 blocks in the past as it is to start a long range attack from the genesis block, recalculating practically all of the hashes.

As a result, blockchain models should be more fine-grained, allowing application developers to specify how much security is assigned to a certain variable. Certainly, lower security state variables might be much cheaper than the high security ones.

Realizing on-chain blockchain governance


To realize on-chain governance on a blockchain, you have to have two elements:

1. System variables or system contracts: these are special variables that are stored in the blockchain itself, and the consensus algorithm of the nodes depends on these variables. As an example, the difficulty of the standard Bitcoin blockchain can be regarded as such a variable. Optimally, even the exact consensus mechanism can be stored in such a variable, and the nodes might pick the exact algorithm based on the variable on the fly. Examples might be changing from proof of work to proof of stake or to byzantine fault tolerance.

2. A changing mechanism for the variable. There might be different mechanisms for the change, like:
 - totally centralized: there is an authorized actor with a special secret key who can change the value.
 - community voting: several actors, like prominent community members, vote for the change, and if a majority or supermajority votes for the same value, the value changes.
 - voting by mining: miners can indirectly initiate changes in the protocol itself as well; this is a kind of voting by work (for proof of work) or voting by stake (for proof of stake). A similar mechanism is implemented in some cryptocurrencies where the block size can be configured by the miners.

Certainly, extreme attention has to be paid when realizing such systems, because changing the consensus mechanism opens up a totally new form of attack: attacking the on-chain governance change itself, for example with forking attacks. For this reason, it is always advisable to implement such governance rules so that they take effect over a longer run than standard transaction validation, similarly to Bitcoin, where the coinbase transaction takes effect only after 100 blocks.

Another option might be to realize something like "soft governance" in some cases. As an example, we can say that the block size limit is a governance variable that can be increased if a certain agreement is reached among the participants. Alternatively, we can do something like "soft governance": the block size limit can be increased by any miner, it is only disincentivized.

On chain governance in multi-hash Blockchain

In a multi-hash blockchain system, where the hash pointers of a pair can have different reset times, there might be the possibility to control these reset times based on a common on-chain consensus. In a given consensus system, the hash pointers might be stored as:

<p1, reset_time_1, remaining_reset_time_1>
<p2, reset_time_2, remaining_reset_time_2>

The reset algorithm can work as:

if (remaining_reset_time_i <= 0) {
  do hash reset;
  remaining_reset_time_i = reset_time_i;
}

The reset time, however, is stored on the blockchain, so there might be a decentralized voting mechanism, controlled either by the miners or by the community, that can change the reset time to a new value. This is, however, not so simple. The problem is that the reset of the p1 hash pointer has to fall in the middle of the reset period of p2, otherwise the security of the system can be compromised more easily. One algorithm can be the following:

Supposing we have accepted a new future_reset_time, we can apply the modification at the next reset, as in the following example for pointer 1, supposing that the resets have a 50% time delay relative to each other:

if (remaining_reset_time_1 <= 0) {
  do hash reset;
  remaining_reset_time_1 = future_reset_time;
  remaining_reset_time_2 = future_reset_time / 2;
  reset_time_1 = future_reset_time;
  reset_time_2 = future_reset_time;
}
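The staggered reset schedule can be modeled as a small Python step function; the state layout and names mirror the pseudocode above but are otherwise illustrative:

```python
def step(chain_state, future_reset_time=None):
    """Advance the two hash pointers' reset countdowns by one block.

    chain_state holds reset_time_i / remaining_i for the two pointers; a
    pending future_reset_time is applied at pointer 1's next reset, keeping
    the two resets 50% out of phase, as in the pseudocode above.
    """
    for i in (1, 2):
        chain_state["remaining_%d" % i] -= 1
    if chain_state["remaining_1"] <= 0:
        if future_reset_time is not None:
            # a voted-in new reset time takes effect at this reset
            chain_state["reset_time_1"] = future_reset_time
            chain_state["reset_time_2"] = future_reset_time
            chain_state["remaining_1"] = future_reset_time
            chain_state["remaining_2"] = future_reset_time // 2
        else:
            chain_state["remaining_1"] = chain_state["reset_time_1"]
    elif chain_state["remaining_2"] <= 0:
        chain_state["remaining_2"] = chain_state["reset_time_2"]
    return chain_state
```

Starting from reset periods of 4 and applying a voted future_reset_time of 8, pointer 1 restarts with a full period of 8, while pointer 2 restarts with 4 remaining blocks, keeping the half-period offset.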

Certainly, it is an open question how agreement on the new retention time can work most efficiently. One idea can be to introduce special variables in the blockchain that store such system information, together with a voting mechanism, like special voting transactions by end-users, or voting by mining power, that can make a proposed value final.


Data retention policy and multi-hash Blockchain


As we have seen in our previous blog, there is a possibility to build up a multi-hash blockchain system where some of the hashes are reset regularly. Such a structure provides the possibility to define different variables on the blockchain with different retention policies, or with different memories. There might be a system that contains several different variables with several different retention policies, like:
- classical hash pointers without reset, going back to the genesis block, are the real variables of the system, providing real immutability.
- variables controlled by regularly reset hash pointers have a retention time between [N/2, N], as older values can practically be forgotten without affecting the consistency of the blockchain.
- there can be not only one, but several different levels of variables, with several different sets of retention policies, like storing the value for a year, for 3 years, or forever.

One question must still be answered: what should happen if two variables are combined but they have different retention policies? As a general rule, we can say that variables with a long retention time can always influence variables with a short retention time. Unfortunately, the other direction does not hold: if a short retention time value influences a long retention time value, the retention policy might be compromised.

Another idea might be to implement the actual retention time configuration on chain, meaning that a common on-chain consensus might do some reconfiguration on the fly, without reinitializing the whole chain.

Consensus mechanism in a multi-hash Blockchain structure



As we have seen in our previous blogs, a blockchain system can be a multi-hash system as well, which can be integrated with a multi-hash proof of work. Such a system can easily be integrated with any other consensus mechanism too. Apart from proof of work there is no nonce, no nonce calculation, and no similar problem, so we can simply calculate the hashes, or depending on the situation just one hash, and add the next block to the blockchain with the given consensus algorithm.

Proof of work in a multi-hash Blockchain structure


A proof of work can be realized in a multi-hash blockchain structure as well, as we have already seen in our previous blog. Let us imagine that the blocks in the blockchain are connected not by one but by several hash pointers. These hash pointers might be totally independent from each other, but they might be dependent as well. Supposing we want to have proof of work in such a structure, we have several options:
1. There is one hash pointer with a nonce playing the role alone in the proof of work.
2. Each hash has a nonce, and the proof of work is equally distributed among the nonces.
3. Each hash has a nonce, but the proof of work is distributed among the nonces in a weighted way.

A further difference is that the mechanism should work differently depending on whether there is a reset in the blockchain or not. In case of a reset, one hash pointer will simply be set to zero, while the other ones should run at an increased difficulty. This is easier with an equally distributed difficulty, and might be a little bit more challenging with a weighted distribution.
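Options 2 and 3 above can be sketched as a simple difficulty split in Python. The target check and the splitting scheme are illustrative assumptions (a real protocol would define both precisely):

```python
import hashlib

def meets_target(header: bytes, nonce: int, difficulty_bits: int) -> bool:
    # check whether hash(header || nonce) has difficulty_bits leading zero bits
    digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

def split_difficulty(total_bits: int, weights):
    """Distribute a total proof-of-work difficulty among several hash
    pointers, either equally or in a weighted way (hypothetical scheme)."""
    scale = sum(weights)
    shares = [total_bits * w // scale for w in weights]
    shares[0] += total_bits - sum(shares)   # keep the total difficulty exact
    return shares
```

With equal weights each pointer carries the same share of the difficulty; with weights [2, 1, 1] the first pointer carries half of it.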

Friday, July 27, 2018

Memory of a blockchain and multi-hash Blockchain


The memory of a blockchain is simply the number of blocks, pieces of information, transactions or state variables that are hashed together and represented in the final hash pointers. The final hash pointer of a standard blockchain solution has infinite memory, in the sense that the information is hashed back to the genesis block. However, this does not always have to be the case: certainly, infinite memory provides better security, but after a certain point it seems to be irrelevant whether we keep the history only for a couple of years or forever. In this sense, it makes sense to imagine blockchain platforms that have less than infinite memory.

Thursday, July 26, 2018

Forgettable Blockchain hash structure (multi-hash blockchain)

Blockchain platforms are not really optimal in the sense that immutability is actually not always desirable for many applications. Such an application is, for example, GDPR conform identity management, which should have the possibility to delete or modify data in a final way, meaning that old versions do not remain in the blockchain.
One way of doing this can simply be resetting the hash pointer chain from time to time and forgetting the old values. Certainly, the problem is that at resetting the pointer, the whole system becomes very vulnerable to different kinds of attacks. This can be avoided by using two independent hash pointer chains and resetting them with a delay, meaning that when resetting hash pointer p1, the variables still need to be compatible with hash pointer p2, like in the following picture:



Certainly, such a system has less security than a classical blockchain solution. It can be embedded easily into a state based solution, and not so easily into a UTXO based one. Further consideration is required if both transactions and state variables are stored as information; certainly, the logic should be applied to the state variables and indirectly to the transactions. The system might be combined with classical blockchain solutions as well, separating variables that should be preserved in the blockchain forever from those that should be preserved only for a given time frame.

Fabric composer tips and tricks - ACL for admin


If you start to modify or define the access control rules for a business blockchain network, pay attention that you do not accidentally revoke code changing or code reading access for the networkAdmin role. If you do, it might happen that you do not have access to your source code anymore. As a consequence, it is practical to start with general rules that give your admin role access to everything, like:

rule NetworkAdminUser {
 description: "Grant business network administrators full access to user resources"
 participant: "org.hyperledger.composer.system.NetworkAdmin"
 operation: ALL
 resource: "**"
 action: ALLOW
}

rule NetworkAdminSystem {
 description: "Grant business network administrators full access to system resources"
  participant: "org.hyperledger.composer.system.NetworkAdmin"
  operation: ALL
  resource: "org.hyperledger.composer.system.**"
  action: ALLOW
}
It is important to note that your participant is not your networkAdmin, so creating rules for your participants while deleting the rules for the networkAdmin will not have the intended effect.

Blockchain and evolutionary algorithms

From a conceptual perspective, a blockchain can be regarded as a kind of evolutionary algorithm that is driven indirectly by market forces. The genesis block might contain something like a set of initial variables, which we can regard as the initial population. The population is reproduced in each round with the help of different market factors and human interactions. The variables might take new values, different values might recombine into new ones, and even brand new variables can appear, mimicking something like recombination or mutation. Certainly, it is questionable whether clean algorithms can be imagined for representing something like a genotype or a phenotype. One such example is CryptoKitties, where actually both mutation and recombination are defined in a clearly specified way; however, it might not be the only application that can be regarded as blockchain based market evolution.

Tuesday, July 24, 2018

Notes on the consortium blockchains and strategic positioning



Honestly, I do not know if consortium blockchain solutions will succeed, or if blockchain will rather be the field of public permissionless blockchains. But if consortium blockchain is a use-case, big companies will buy them from enterprise IT vendors, like Microsoft, IBM, Oracle, Hyperledger ...
Nobody will buy a consortium blockchain platform from a no-name startup. Not because the technology is not good enough, but simply because this is how enterprise IT buying decisions work. This has to be taken into consideration in every development decision that tries to create a consortium blockchain solution. Certainly, possible strategic directions are:
- we are good enough to compete with these corporations (and have enough capital as well),
- we build up the company to sell not directly to the end customers but to these enterprise vendors, which might be a little bit tricky if the company was financed by an ICO,
- we find a niche market among these enterprise vendors.

IOU debt graphs and trust lines on the blockchain


In the previous two blogs (debt graph and mining) we briefly brainstormed how an IOU debt network can be represented on the blockchain and how it can be embedded into a mining process. We can extend the model with trust lines as well, meaning that the IOU network can be optimized, and a new IOU can be issued, only if there is an existing trust line between the two participants. A trust line can again be represented with a matrix T, where

T[i,j] = 0 if participant i does not trust participant j and
T[i,j] = 1 if participant i trusts participant j

It is an open question what further properties a trust matrix should have. As an example, it might be symmetric and/or transitive:

symmetric: if T[i,j] = 1 then T[j,i] = 1 as well
transitive: if T[i,j] = 1 and T[j,k] = 1 then T[i,k] = 1

Having a trust matrix means that there is the possibility to issue a credit, or to optimize the relationship between i and j, if and only if T[i,j] = 1. Certainly, it is pretty much an open question what should happen if a trust line can also be deleted.
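These properties translate directly into code. The following Python sketch (the function names are my own) checks symmetry, computes the transitive closure with Warshall's algorithm, and gates optimization on an existing trust line:

```python
def is_symmetric(T):
    # T[i][j] = 1 must imply T[j][i] = 1
    n = len(T)
    return all(T[i][j] == T[j][i] for i in range(n) for j in range(n))

def transitive_closure(T):
    """Warshall's algorithm: extend trust along chains, so that
    T[i][k] = 1 whenever T[i][j] = 1 and T[j][k] = 1."""
    n = len(T)
    C = [row[:] for row in T]
    for j in range(n):          # j is the intermediate participant
        for i in range(n):
            for k in range(n):
                if C[i][j] and C[j][k]:
                    C[i][k] = 1
    return C

def may_optimize(T, i, j):
    # a credit can be issued or optimized between i and j only along a trust line
    return T[i][j] == 1
```

For a trust matrix where 0 trusts 1 and 1 trusts 2, the transitive closure also grants 0 trust in 2, while the original matrix itself is not symmetric.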

Creating matrix operators with solidity

Solidity is not really meant to store large amounts of data. It might nevertheless happen that the requirements are to store matrices in our smart contract. If that happens, one possibility is to store them as hash tables realized by mappings:

pragma solidity ^0.4.24;

contract Matrix {
    
    mapping(uint => mapping(uint => uint)) _matrix;
    uint public maxi;
    uint public maxj;

    function elementAt(uint i, uint j) public view returns (uint) {
        return _matrix[i][j];
    }
    
    function setElement(uint i, uint j, uint _value) public {
        _matrix[i][j] = _value;
        if (i > maxi) {
            maxi = i;
        }
        if (j > maxj) {
            maxj = j;
        }
    }
}

In this way, accessing or modifying matrix elements is easy. However, if we have to create operations on top of the matrices, like adding or subtracting them, it can become pretty costly very fast, because we have to iterate over the matrices with nested loops, and even if the matrix is sparse, meaning that only a couple of elements are filled in, both the iteration and the modification cost money.

contract MatrixOperator {
    
    function add(Matrix _matrix1, Matrix _matrix2) public {
        // get the maximal indexes of the two matrices
        uint maxi = _matrix1.maxi();
        if (_matrix2.maxi() > maxi) {
            maxi = _matrix2.maxi();
        }

        uint maxj = _matrix1.maxj();
        if (_matrix2.maxj() > maxj) {
            maxj = _matrix2.maxj();
        }

        for (uint i = 0; i <= maxi; i++) {
            for (uint j = 0; j <= maxj; j++) {
                // only touch elements where there is something to add,
                // since every storage write costs gas
                if (_matrix2.elementAt(i, j) > 0) {
                    _matrix1.setElement(i, j, _matrix1.elementAt(i, j) + _matrix2.elementAt(i, j));
                }
            }
        }
    }
    
    function sub(Matrix _matrix1, Matrix _matrix2) public {
        // get the maximal indexes of the two matrices
        uint maxi = _matrix1.maxi();
        if (_matrix2.maxi() > maxi) {
            maxi = _matrix2.maxi();
        }

        uint maxj = _matrix1.maxj();
        if (_matrix2.maxj() > maxj) {
            maxj = _matrix2.maxj();
        }

        for (uint i = 0; i <= maxi; i++) {
            for (uint j = 0; j <= maxj; j++) {
                if (_matrix2.elementAt(i, j) > 0) {
                    // guard against uint underflow before subtracting
                    require(_matrix1.elementAt(i, j) >= _matrix2.elementAt(i, j));
                    _matrix1.setElement(i, j, _matrix1.elementAt(i, j) - _matrix2.elementAt(i, j));
                }
            }
        }
    }
}

In practical scenarios, working with matrices larger than about 10 x 10 is pretty impractical. Even with smaller matrices, you have to calculate with a gas cost of 5,000 - 20,000 for each modified value.


Monday, July 23, 2018

Optimizing IOU debts and mining


As we have seen in the previous blog, debt optimization practically means proposing a new directed graph structure in such a way that the balances of the individual accounts do not change. The easiest way to represent the debt graph is the adjacency matrix, where each A[i,j] element represents the IOU contract from i to j. Based on that representation, we can formally define the balance of an account as well:

Balance i = Sum j (A[i,j]) - Sum k (A[k,i])

Considering a general mining process, there can be several {IT1, IT2, ... ITN} transactions issuing new IOU-s each transaction is signed by its creator. On top, there is a set of {OT1, OT2, ... OTN} optimization transactions either signed by trusted optimizer nodes, or by nobody. Both sets of transactions are in one two separate transactions pools. The idea of mining is to find a subsets of {IT1, IT2, ... ITK} and {OT1, OT2, ... OTK} transactions in a way that for all account balances, the change is initiated by only by the issuing transactions, meaning that:

Balance(i, new) = Balance(i, old) + Sum_j IT[i,j] - Sum_k IT[k,i], where IT is the matrix built up from the {IT1, IT2, ... ITK} transactions; the optimization transactions must leave every balance unchanged. Certainly, the complexity of the network has to be reduced by the optimization transactions; it is an open question how this can be measured. 

Based on these definitions, there can be a one-shot or a two-round transaction process: 
- In a two-round process, the first round is a purely optimization round, while the second is a classical transaction round. 
- In a one-shot process, both the optimization and the new transactions take place together. 
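The consistency constraint above, namely that the selected optimization transactions must not change any account balance, can be sketched as a simple check on the aggregated optimization matrix (hypothetical helper, assuming the {OT1, ... OTK} set has already been summed into one matrix):

```javascript
// An optimization matrix OT preserves every balance exactly when, for
// each account i, its row sum equals its column sum: the net
// contribution Sum_j OT[i][j] - Sum_k OT[k][i] is zero.
function preservesBalances(OT) {
  const n = OT.length;
  for (let i = 0; i < n; i++) {
    let rowSum = 0, colSum = 0;
    for (let j = 0; j < n; j++) {
      rowSum += OT[i][j];
      colSum += OT[j][i];
    }
    if (rowSum !== colSum) return false;
  }
  return true;
}
```

A cycle-cancelling optimization, such as reducing each edge of a debt cycle by the same amount, passes this check; a transaction that simply issues new debt does not.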




  


Optimizing IOU debt graphs on the blockchain


When optimizing an IOU debt graph on the blockchain, we should consider the following properties:
- Issuing a new IOU debt must be associated with a digital identity. I should only be able to issue a new IOU if I can prove with my private key that I am the issuer behind the given identity. 
- There should be a balance for each identity on the blockchain, accumulating how much I owe to others and how much others owe to me. The balance can be changed only if someone creates a new, digitally signed IOU. 
- The IOU network can be optimized either by everyone or by special optimizer roles. Network optimization can be executed only in a way that none of the account balances changes.
- The effect of the network optimization should be a decreased complexity of the graph, which can be measured for example as the number of edges of the debt graph, or the edges weighted by the debt amounts. 
- Decreasing the complexity should be incentivized; increasing the complexity should not be allowed.
- The optimization should not be a totally independent round; it should run in parallel and stay consistent with the issuance of new debt.
- It is an open question whether a debt can be transferred or traded explicitly, outside the optimization mechanism.   
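The two complexity measures mentioned above can be sketched for an adjacency-matrix debt graph (illustrative helpers, assuming nonnegative entries):

```javascript
// Number of edges of the debt graph: nonzero entries of the matrix.
function edgeCount(A) {
  return A.flat().filter(v => v > 0).length;
}

// Edges weighted by the debt amount: total outstanding debt.
function weightedComplexity(A) {
  return A.flat().reduce((sum, v) => sum + v, 0);
}
```

An optimization transaction would then be admissible only if it decreases the chosen measure while keeping all balances unchanged.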

Hybrid blockchain applications and network segments


Some blockchain applications are not meant to be totally decentralized; instead, they should work together with the corporate internal IT infrastructure. Such a solution requires special considerations, as the mission-critical business logic is separated into two parts: 
1. On the one hand, critical decentralized business logic runs on the blockchain as smart contracts. 
2. On the other hand, critical centralized business logic runs on the corporate intranet, completely separated from the internet. 

Integrating these two requirements might seem contradictory at first sight. What you can do to create a secure infrastructure is to place some proxy blockchain nodes in the DMZ segment of the system to integrate with the live blockchain network, and some internal nodes in your internal network segment that can communicate with your centralized business logic. To connect the two sets of nodes, you need mechanisms for offline signing and for transferring the offline-signed transactions from your offline nodes to your online nodes.     

Modelling IOU contracts on the Bitcoin blockchain


One way to model an IOU contract on the Bitcoin blockchain is to send it as a transaction with a timelock (nLockTime). This practically means that the transaction cannot be mined before the given lock time; in other words, it works indirectly as a promise that I will pay a certain amount of money after a certain time. However, it does not work with exactly the same logic as a normal IOU contract. The problem is that such a transaction is not in the blockchain itself, only in a transaction pool, which means it can be overwritten at any time by a double spend, and after the timelock expires it is only up to the miners when exactly the transaction gets included in the blockchain.  
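The nLockTime rule can be sketched as follows (a simplified illustration; the real consensus rule also involves input sequence numbers and median-time-past, so treat this only as the basic idea):

```javascript
// Bitcoin interprets nLockTime values below 500,000,000 as a block
// height, and values at or above it as a Unix timestamp. A locked
// transaction only becomes valid for mining once the chain passes
// the lock.
const LOCKTIME_THRESHOLD = 500000000;

function isMinable(nLockTime, currentHeight, currentTime) {
  if (nLockTime === 0) return true;        // no lock at all
  if (nLockTime < LOCKTIME_THRESHOLD) {
    return nLockTime < currentHeight;      // height-based lock
  }
  return nLockTime < currentTime;          // time-based lock
}
```

Until `isMinable` becomes true, the "IOU" exists only in the mempool, which is exactly why it can be double-spent away before maturity.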

Sunday, July 22, 2018

Proof of useful work via optimizer transactions


Proof of useful work is one of the holy grails of the blockchain space. Although there are initiatives, like proof of useful work via proof of stake, where coins at stake would be generated by actually computing something useful, this scheme does not provide the possibility to integrate the useful work into the transaction processing itself. Platforms like decentralized IOU networks would require that transaction processing and optimization form one common system, with optimization being an integral part of mining or transaction validation. 

One way of doing this would be to have a separate kind of transaction, called an optimization transaction, and a special role called the optimizer. The optimizer would analyse the state of the last known blockchain together with the other proposed but not yet mined optimization transactions, and propose new optimization transactions. These would be placed in a special transaction pool containing only optimization transactions. The task of the miner is to collect a set of normal and optimization transactions and put them into a block in such a way that the whole set is consistent. Consistency here means more than just avoiding double spending; it might actually be something more complicated. A new block is formed from a set of standard and optimization transactions, and the new system state emerges as these transactions are applied to the existing state. 

Certainly, it is an open question how exactly the system can be fine-tuned, among the others, the following points should be considered:
- Blocks should be motivated to contain both standard and optimization transactions on average; otherwise we end up either with a standard blockchain or with a purely optimizer-based structure. However, finding such a set of transactions should happen in a computationally efficient way, otherwise there would not be any guaranteed block time.   
- Optimizers should be motivated to create optimization transactions that are as efficient as possible. The incentive mechanism can be based on, for example, measuring the efficiency of the optimization algorithm, an internal cryptocurrency, or transaction fees.
- It is an open question how the optimization transactions can be combined with the standard transactions in a cryptographic sense.
- It is another open question how the two-party market will change with the appearance of the third party (the optimizer); it is similarly a question how classical attacks against the blockchain will be affected.

Friday, July 13, 2018

Fabric composer tips and tricks - revoking CRUD right for a resource


If you revoke Create, Update or Delete access for a role in Hyperledger Fabric Composer, you have to pay attention to the following things:
- The ACL file is evaluated rule by rule; the first rule that matches the participant, asset and operation is applied. 
- If no rule matches but an ACL file exists, access is denied. 
- So one way to do it is to give general access to the given participant, which is evaluated only if no deny rule is found first. If you use the Hyperledger Fabric Composer online playground, you might also need to grant the test identity access to the system resources, because otherwise you cannot test your code with that identity from the playground.
- As the first rule, however, you have to explicitly deny the operations on the specific resource. 

First rule: 

rule UserHasNoCreateUpdateDeleteRight {
    description: "User can not create, update or delete"
    participant: "org.model.User"
    operation: CREATE, UPDATE, DELETE
    resource: "org.bicyclesharing.model.Asset"
    action: DENY
}

Second rule: 

rule UserHasAccessForAll {
    description: "User role has access for everything"
    participant: "org.model.User"
    operation: ALL
    resource: "**"
    action: ALLOW
}



Blockchain solutions and operational cost structure


Designing a solution on top of the blockchain always requires special cost considerations, because of the transaction costs of most DLT platforms. The classical use case is the fully decentralized model, where the end users communicating with the platform pay the transaction cost explicitly on their own. However, there are sometimes requirements that differ from this classical situation: 
- One option is that a company or provider wants to offer services on the blockchain and take over the end users' transaction fees. In this use case, the easiest way is to add the customer accounts to a pool and maintain a budget, which can be stored in a smart contract, from which certain operations are paid automatically. If this budget falls below a certain level due to expenditure, it has to be topped up. 
- Another model is to initiate some kinds of transactions automatically. Certainly, this is not possible directly on behalf of a third party, but such behavior can be initiated by a trusted or semi-trusted third party. In this scenario as well, the accumulated transaction fee must be paid by the semi-trusted third party, so it appears as a kind of operational cost. 

The aggregated transaction fee in these situations can be estimated as the average number of transactions to be executed times the average transaction fee, and it appears as an operational cost of the system. The use cases must be carefully analyzed, as the operational cost in certain situations can be enormous.
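The back-of-the-envelope estimate above can be written down directly; on Ethereum the fee per transaction is gas used times gas price, so (illustrative helper with made-up numbers, units chosen by the caller):

```javascript
// Aggregated operational cost ≈ number of transactions × gas per
// transaction × gas price. Here the gas price is given in gwei, so
// the result is in gwei as well (1 ETH = 1e9 gwei).
function estimateOperationalCost(txCount, avgGasPerTx, gasPriceGwei) {
  return txCount * avgGasPerTx * gasPriceGwei;
}
```

For example, sponsoring 1,000 simple transfers (21,000 gas each) at 20 gwei costs 420,000,000 gwei, i.e. 0.42 ETH, per period.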

Wednesday, July 11, 2018

Blockchain development and project management


Blockchain development requires a new way of project management; let us call it SF-Agile, meaning agile where possible, but Security First above all: security is the highest priority, agility only the second. From a practical point of view, as long as we are on a test network, the development itself can be agile; however, as soon as we want to deploy to the live network, the requirements have to be fixed, and everything has to be unit tested and possibly formally analyzed and verified. So in the last step Security First has to dominate very strongly; in this sense this step is similar to a waterfall project.

Monday, July 9, 2018

Estimating the cost of a Blockchain project


Estimating the cost of a blockchain project is pretty tricky, as there is generally not much experience in the field. Usually it is not estimated a priori, because on the one hand there are a lot of unexpected technological challenges, and on the other hand the requirements are unclear as well. For this reason there is usually no cost estimation at the beginning of a project; instead, the project is delivered in an agile way, starting with a proof of concept and continuing with a working prototype, followed by fine-tuning both the requirements and the technological architecture. 

Anyway, blockchain projects usually have the following factors to be considered, most of them based on the architectural requirements of the project:

- Smart contract development: the key element of the architecture is the smart contract, and developing such contracts is more or less resource intensive. In the case of Solidity development on Ethereum, for example, writing the code itself does not take much effort; however, a lot of effort has to be put into testing, quality assurance and formal verification, as the contract will be deployed onto a public, immutable ledger. 

- Blockchain infrastructure: on public blockchain platforms, the blockchain infrastructure, both the live network and the different test networks, is usually already in place. For consortium blockchains, the whole infrastructure has to be planned, configured and delivered, which can be a huge task. There are, however, initiatives to speed up the infrastructure deployment of consortium blockchain networks, like Azure Blockchain from Microsoft or the Cello project from Hyperledger.

- Frontend or UI: a frontend and user interface is usually required for every blockchain application. It usually means classical frontend development with JavaScript, HTML and CSS. There are, however, experimental platforms that can speed up frontend development: in Azure Blockchain Workbench you can click together some of the UI elements, and in Hyperledger Fabric Composer a basic user interface can be generated automatically from the business network definition file. 

- Storage: if storage is required, the major question is whether it is centralized or decentralized. If it is decentralized, like IPFS or Swarm, the smart contract developers usually handle this part of the development as well. If it is centralized storage, like a classical file system or an SQL database, the integration must be implemented. There are already frameworks that can speed up blockchain integration with existing storage systems, like Azure Blockchain from Microsoft.  

- System integration: in enterprise blockchain applications, existing legacy systems, like databases, existing ledgers or authentication providers, usually have to be integrated with the blockchain. This can mean atomic swaps, or simply imports and exports. Some enterprise blockchain systems already provide experimental services in this direction. 

- Non-technical roles: some management roles should usually be considered in a blockchain project as well, like a project manager if the project gets more complex, and a blockchain consultant and requirements engineer to fine-tune the business requirements.  

Fabric composer tips and tricks - issue identity programmatically


If you want to issue a new identity for a participant, you can only do it from outside your chaincode. Unfortunately there is no support for issuing identities inside your transaction or contract logic, which makes sense, since this kind of management is usually not part of the transactional logic. To issue an identity from the outside world, do the following:

1. Get your business network connection:

const BusinessNetworkConnection = require('composer-client').BusinessNetworkConnection;

2. Connect to your network, with a certain network card:

await businessNetworkConnection.connect('admin@mynetwork');

3. Issue new identity:

let result = await businessNetworkConnection.issueIdentity('Participant#email', 'name');

4. Close the connection:

await businessNetworkConnection.disconnect();  

Saturday, July 7, 2018

Fabric composer tips and tricks - create an event


Creating and raising an event is not complicated at all in Fabric Composer. You need the following steps:

1. Define your event in the model file:

event MyEvent {
  o <type> Property1
  o <type> Property2
}

2. Create your event in your transaction JavaScript logic:

  let myEvent = getFactory().newEvent(namespace, 'MyEvent'); 

3. Fill in the event properties:

  myEvent.Property1 = <value1>;
  myEvent.Property2 = <value2>;

4. Raise the event:

  emit(myEvent);


Fabric composer tips and tricks - update assets or participants


To update an asset inside a Fabric Composer transaction, you have to do the following steps:

0. Supposing you have a model file and an asset:

  asset AssetName identified by <primkey> {
    o <type>  <primkey>
    o <type>  RequiredField1 
    o <type>  RequiredField2 
   ...
  }

1. Get a reference to your asset. The reference usually comes from the input parameter, or from following the object graph of your input parameter. Optionally, you can query the asset registry for a certain asset or a set of assets. Having the reference, update the required fields:

  assetReference.RequiredField1 = newValue1;
  assetReference.RequiredField2 = newValue2;
    ...


2. Get a factory object: it helps you to create other resources, events, assets and so on.

  const factory = getFactory();


3. Get an asset registry to update your asset:

  const assetRegistry = await getAssetRegistry(namespace + 
    '.AssetName');

4. Update the asset in the asset registry:

   await assetRegistry.update(assetReference);

Optionally, you can update several references in parallel:

   await assetRegistry.updateAll(assetReferences);

The participant case is similar; the only difference is that instead of the asset registry you should use the participant registry.


Fabric composer tips and tricks - delete all data


If you want to delete all the data of a certain asset or participant category, for example in a cleanup transaction, you have to do the following:

1. Get the asset or participant registry:

  const myAssetRegistry = await getAssetRegistry(namespace + 
    '.MyAsset'); 

2. Query all elements of the given asset or participant registry:

  var allAssets = await myAssetRegistry.getAll();

3. Delete all of the queried data:

  await myAssetRegistry.removeAll(allAssets);

Fabric composer tips and tricks - create a new concept


Creating a concept inside a Fabric Composer transaction is pretty similar to creating an asset; here we consider the use case where you add a new concept while creating a new asset. You have to do the following steps:

0. Supposing you have a model file and an asset with a concept:

  concept ConceptName {
    o <type> Field1
    o <type> Field2
    ...
  }

  asset AssetName identified by <primkey> {
    o <type> <primkey>
    o ConceptName ConceptField 
    ...
  }

1. Get a factory object: it helps you to create other resources, events, assets, participants and so on.

  const factory = getFactory();

2. Create a new concept with the help of factory.newConcept, then create a new asset with the factory.newResource call and a new value for the primary key. After that, fill in the required fields of both the concept and the asset, and associate the concept with the adequate field of your asset: 

  const newConcept = factory.newConcept(namespace,
     'ConceptName');
  newConcept.Field1 = <value1>;
  newConcept.Field2 = <value2>;

  const newAsset = factory.newResource(namespace, 'AssetName',
     <primkey>);
  newAsset.ConceptField = newConcept;
  newAsset.RequiredField2 = <value2>;
    ...

3. Get an asset registry to update your asset:

  const newAssetRegistry = await getAssetRegistry( 
      namespace + '.AssetName');

4. Add the new asset to the asset registry:

   await newAssetRegistry.add(newAsset);