...by Daniel Szego
"On a long enough timeline we will all become Satoshi Nakamoto."
- Daniel Szego
Showing posts with label cryptoeconomy. Show all posts

Sunday, January 5, 2020

An Austrian economic interpretation of the last 3 years of Blockchain


The Austrian school of economics provides a pretty good interpretation of what has been happening in the Blockchain space in the last couple of years. The ICO and token sale hype caused three years of investment flowing into the space in a totally uncoordinated way. The reason for this money pump was not only speculation on the technology but the fact that token sales liberated and disrupted the whole funding and investment industry. Due to regulation issues this investment source was then basically cut off, and a lot of companies found themselves in the situation of having the invested money but not being capable of building a system that could be sold to any customers. The last two years of market and technology free fall were, among other things, caused by these companies, unable to find a real market for their products, being slowly liquidated.

Thursday, December 6, 2018

SLA with cryptoeconomical service guarantee


Service level agreements are an important part of every corporate infrastructure and of companies providing infrastructure or software as a service products. Service level agreements provide at least a theoretical guarantee that if the service is not delivered as promised, something can be done. However, really doing something when the SLA is not honored is difficult: starting a legal process might take years and cost a lot of money, and finding a new provider and migrating existing services to its infrastructure might again take months and can be pretty costly.

A good solution can however be to introduce cryptoeconomical guarantees as insurance for a service, similarly to what the Ethereum initiative Swap, Swear and Swindle does for off-chain computation and services. SLA parameters have to be formalized in an explicit and measurable way, and in case the service is not provided, the consumers of the service should be able to prove technically, in a reliable way, that the SLA was not honored. The service provider should back the service with insurance up to a certain amount of cryptocurrency, held online in a smart contract. Customers might then choose a provider not just based on its services but on how well it is cryptoeconomically secured, in other words how much cryptocurrency is locked as an insurance or security deposit in case there are problems with the service. If a customer experiences problems with the SLA, a cryptographic proof can be generated; this proof can be validated by decentralized validators, and if the customer is right a certain amount of cryptocurrency can be transferred, up to the security deposit of the service provider. The whole process might run fully or almost fully automatically, practically in minutes.
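The flow above can be sketched roughly as follows. This is only an illustrative model in Python, not a real smart contract; all names (SLAInsurance, file_claim, the validator callables) are my own assumptions, since the post only describes the mechanism informally.

```python
# Sketch of the SLA-insurance flow: a security deposit locked for a
# service, claims backed by a proof, validated by a validator majority.

class SLAInsurance:
    def __init__(self, provider, deposit, validators):
        self.provider = provider      # service provider identity
        self.deposit = deposit        # cryptocurrency locked as security deposit
        self.validators = validators  # decentralized validators checking proofs

    def file_claim(self, customer, proof, amount):
        """Customer submits a proof that the SLA was not honored."""
        if amount > self.deposit:
            raise ValueError("claim exceeds the locked security deposit")
        # a majority of validators must accept the proof of SLA violation
        approvals = sum(1 for v in self.validators if v(proof))
        if approvals * 2 > len(self.validators):
            self.deposit -= amount
            return ("payout", customer, amount)
        return ("rejected", customer, 0)

# usage: three toy validators that accept any proof labelled "violation"
validators = [lambda p: p == "violation"] * 3
policy = SLAInsurance(provider="acme-cloud", deposit=100, validators=validators)
print(policy.file_claim("alice", "violation", 30))  # -> ('payout', 'alice', 30)
```

In a real deployment the validator check and the payout would of course run on-chain, and the proof would be cryptographic rather than a plain string.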

Saturday, December 1, 2018

Notes on the strengths of cryptoeconomical guarantees



Cryptoeconomical guarantees are actually weaker than pure cryptographical ones. A cryptoeconomical guarantee means that hacking the system is not profitable: more money will be lost than gained. As an example, proof of stake is based on such a cryptoeconomical guarantee, which states that if you try to hack the system and it becomes well known, you lose all of your stake. The logical motivation is that most people, as rational actors, behave economically rationally, meaning that they tend to maximize their profit or benefit in some sense. However, the problem is that most of these economical guarantees exist only within the system itself, based on assets and resources that are defined by the system itself. So it is certain that if I try to hack a proof of stake system I will lose my stake, but my economically rational profit maximization is perhaps not based a hundred percent on the system itself. As an example, if the hacking attempt becomes well known, the general trading value of the platform against, say, USD might fall, which means that I can make a profit from shorting and might well profit even though I lose my stake.

On the contrary, cryptographical proofs use another scarce resource, which is usually computation. Simply put, most practical cryptographical algorithms assume that breaking the algorithm is computationally infeasible, meaning that it would take millions of years even with the most advanced, state of the art computers. This kind of practically-impossible guarantee is much stronger than a not-profitable one.

As a consequence, a cryptographical or computational guarantee is always stronger than a simple cryptoeconomical one, mostly because economical rationality can be interpreted only within the system, while in general economical rationality is something more complex.

Perhaps it is important to note though that proof of work is actually an economical guarantee, not a cryptographical one, because the amount of computational power needed to break the consensus depends on the competition of the miners. In other words, it depends on the amount of money invested in mining equipment.

Sunday, November 18, 2018

Notes on multi block algorithms and protocols


Research on decentralisation is at the moment focusing pretty much on the scalability of different protocols and platforms. Based on the current research directions, there might well be efficient blockchain protocols in a couple of years. So we might as well investigate the possibilities of creating algorithms and protocols that cannot be executed in one block or one transaction but can only be realized by several actions, crossing several blocks. Actually, layer 2 protocols like Lightning Network or Raiden already go a little bit in this direction. Multi-block protocols can provide many services that are not imaginable with current one-block architectures.

How to create a native external oracle with Nakamoto consensus





Similarly to the previous blog, designing a decentralized native external oracle can be done in the same way as a native random oracle. The principle is basically the same: on the one hand, the miners or validators measure the external data sources and put them into blocks, or at least temporal blocks. On the other hand, the imported data should be kept secret, because otherwise new miners could influence or even hack the algorithm itself.

The full multi-block native external oracle algorithm can be described as follows:

1. An initiator having a {Priv, Pub} private and public key pair creates an external oracle request that includes the public key as well:

request_extern (Pub, N, ext)

, where Pub is the public key of the requestor, ext is the external data source and N is the number of rounds during which the external data source has to be measured.

2. At mining or validation, a miner or validator measures the value of the external data source, encrypts it with the public key of the requestor and puts it in the request itself. So after the first validation, the request will look like:

request_extern (Val1)(Pub, N-1, ext) 

where Val1 is the measured external data encrypted with the public key. To ensure security, the Val1 value should be put into the blockchain as well, not necessarily forever but at least during the N phases of execution of the native external oracle. So, in other words, the request would be split into two parts:

request_extern (Val1) 

will be written into the blockchain

request_extern (Pub, N-1,ext)

can be available as a new transaction request that is propagated throughout the network in transaction pools, waiting to be mined or validated. Similarly, after k<N rounds, there would be k encrypted external values in the blockchain:

request_extern (Val1)
request_extern (Val2)
...
request_extern (Valk) 

and a new request as a transaction which is 

request_extern (Pub, N-k, ext)

3. After N rounds, the requestor can aggregate the external values and decrypt them with the Priv private key. 

Ext1 = Decrypt_Priv(Val1)
Ext2 = Decrypt_Priv(Val2)
...
ExtN = Decrypt_Priv(ValN)

The individual external values should be aggregated in a way that they provide a correct average value even if some of the nodes try to hack or game the system. The algorithm should contain an incentive mechanism as well, giving a reward to nodes that provided correct values, motivating nodes to produce correct data and providing a Schelling point as the decision making mechanism. Supposing that around one third of the nodes and measurements can be faulty, we can have the following algorithm:

a. filter out the 33% most extreme values
b. average the remaining values, providing the real external value
c. reward the nodes whose values were not filtered out, based on their distance from the average.
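Steps a-c can be sketched like this. The function and variable names, the choice of the median as the provisional center, and the exact reward weighting are my own assumptions; the post only describes the steps informally.

```python
# Sketch of the aggregation: drop the most extreme third of the measured
# values, average the rest, reward the surviving nodes by closeness.

def aggregate(values):
    """values: dict of node id -> measured external value."""
    n = len(values)
    drop = n // 3  # filter roughly 33% of the most extreme values
    # provisional center (median) used to decide which values are "extreme"
    center = sorted(values.values())[n // 2]
    kept = dict(sorted(values.items(),
                       key=lambda kv: abs(kv[1] - center))[:n - drop])
    avg = sum(kept.values()) / len(kept)
    # reward nodes that were not filtered out, larger reward for closer values
    rewards = {node: 1.0 / (1.0 + abs(v - avg)) for node, v in kept.items()}
    return avg, rewards

measurements = {"n1": 100.0, "n2": 101.0, "n3": 99.0, "n4": 100.5,
                "n5": 500.0, "n6": 98.5}
avg, rewards = aggregate(measurements)
# n5's outlier value is filtered out and earns no reward
```

A real implementation would also have to handle ties and adversarially clustered values; this sketch only shows the Schelling-point idea.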

The algorithm unfortunately has some performance issues: it takes N rounds to find out a real decentralized external value. This can be pretty long and pretty costly depending on the given blockchain platform. Considering a Nakamoto consensus, the algorithm is pretty difficult to speed up, as the basic idea is that the external values are coming from N individually different sources, which at a Nakamoto consensus actually means N different blocks. This also implies that the data source should keep its value for a long enough time, like preserving the same value for hours. The algorithm cannot really be used with fast-changing data sources.

A further question is how exactly the data is used, for example within a smart contract. As private keys should not be stored in the blockchain, there would need to be an additional round with the wallet software to decrypt and aggregate the information, which might introduce elements of unnecessary centralization and is difficult to build directly into a decentralized smart contract. It is important to note however that this private key is not necessarily the same as the private key of the account, so it can actually be revealed as soon as all the N nodes have created their measurement. As a consequence, a second transaction can be used to calculate the real external value, like:

evaluate_extern (Priv)

At this moment the private key can be published to the blockchain and the evaluation algorithm can be implemented with a smart-contract in a decentralized way.

From a practical point of view, a smart contract with a built-in native external oracle would look like:

function with_ext (input params) return (output params) {
   ...
   variable = External(<external_datasource>);
   ...
}

For evaluating such a smart contract, two transactions should be used:

- the first one would call the function with the public key of the external oracle, initializing the external data measurement.

transaction init with_ext pub_key_ext

- the second transaction would publish the private key and carry out the whole evaluation and the execution of the rest of the business logic, like:

transaction exec with_ext priv_key_ext

The proposed system unfortunately has one denial of service attack possibility. The one who initiated the external oracle has the private key and can calculate the result in advance. If this result does not benefit him, he may choose not to reveal the private key and not to execute the second transaction.

Saturday, November 17, 2018

How to create a native random oracle with Nakamoto consensus


Creating a real native random oracle is one of the holy grails of the blockchain industry. The problem is not so difficult if the consensus mechanism is a quorum, as the nodes participating in the consensus make decisions independently from each other; it is more difficult with a Nakamoto consensus. The problem with the Nakamoto consensus is that the temporal leader creating the next block is practically a "dictator" of that block and can influence, for example, the random number. The algorithm can however be improved here as well with two ideas:
- creating a real random number takes several rounds, where several nodes each guess a random number and the guesses are aggregated at the end. Certainly this is not necessarily a real solution on its own, as later leaders might see the previous random values and might influence the next value in a way that is profitable to them. To avoid such situations we can use the following idea:
- the random numbers are encrypted with a public key of the requestor. As a consequence, the next node does not really see the values of the previous blocks, so it cannot influence the final result.

The full multi-block native random oracle algorithm can be described as follows:

1. An initiator having a {Priv, Pub} private and public key pair creates a random oracle request that includes the public key as well:

request_random (Pub, N)

, where Pub is the public key of the requestor and N is the number of rounds during which the random number has to be generated.

2. At mining or validation, a miner or validator creates a native random number, encrypts it with the public key of the requestor and puts it in the request itself. So after the first validation, the request will look like:

request_random (Val1)(Pub, N-1) 

where Val1 is the generated random number encrypted with the public key. To ensure security, the Val1 value should be put into the blockchain as well, not necessarily forever but at least during the N phases of execution of the random oracle. So, in other words, the request would be split into two parts:

request_random (Val1) 

will be written into the blockchain

request_random (Pub, N-1)

can be available as a new transaction request that is propagated throughout the network in transaction pools, waiting to be mined or validated. Similarly, after k<N rounds, there would be k encrypted random values in the blockchain:

request_random (Val1)
request_random (Val2)
...
request_random (Valk) 

and a new request as a transaction which is 

request_random (Pub, N-k)

3. After N rounds, the requestor can aggregate the random values and decrypt them with the Priv private key. 

Rand1 = Decrypt_Priv(Val1)
Rand2 = Decrypt_Priv(Val2)
...
RandN = Decrypt_Priv(ValN)

The individually generated random numbers should be aggregated in a way that the randomness is preserved even if some of the values are not really randomly generated. The exact algorithm here is an open question, but it is important that the original entropy of the requested random number is maintained even if some of the nodes are cheating. Ideas might be:

sha256(Rand1, Rand2, .... RandN)
sha256(Rand1 xor Rand2 xor ... RandN)
sha256( ... sha256(sha256(Rand1))... )
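The second idea above can be sketched in a few lines. The function name and the 32-byte encoding are my own assumptions; the key property is that as long as at least one contributor's value is truly random, the XOR of all values stays random.

```python
# Sketch of sha256(Rand1 xor Rand2 xor ... RandN): combine the decrypted
# per-block random values by XOR, then hash the result.

import hashlib

def aggregate_random(rands):
    acc = 0
    for r in rands:
        acc ^= r  # XOR preserves the entropy of any one honest input
    return hashlib.sha256(acc.to_bytes(32, "big")).hexdigest()

final = aggregate_random([0x1234, 0xdead, 0xbeef])
```

Note that plain XOR alone would let the last contributor bias the result if values were revealed in the clear, which is exactly why the scheme keeps them encrypted until all N rounds are done.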
  
The algorithm has two drawbacks that can be fine-tuned in the long run:
- The algorithm takes N rounds to find out a real decentralized random number. This can be pretty long and pretty costly depending on the given blockchain platform. Considering a Nakamoto consensus, the algorithm is pretty difficult to speed up, as the basic idea is that the random numbers are coming from N individually different sources, which at a Nakamoto consensus actually means N different blocks.
- Based on the evaluation algorithm, we have to assume that some of the miners or validators created a good random number. With a good byzantine evaluation function at the end, even if some of the nodes cheat, the resulting random number can be a good one, like cryptographically secure. The problem however is that even the honest nodes are not really incentivized to create good random numbers, since we cannot really measure, or punish the lack of, good randomness. It can certainly be an assumption that a certain number of the nodes are honest, but actually it would be much better to measure and validate this fact.

A further question is how exactly the data is used, for example within a smart contract. As private keys should not be stored in the blockchain, there would need to be an additional round with the wallet software to decrypt and aggregate the information, which might introduce elements of unnecessary centralization and is difficult to build directly into a decentralized smart contract. It is important to note however that this private key is not necessarily the same as the private key of the account, so it can actually be revealed as soon as all the N nodes have created a guess for a random number. As a consequence, a second transaction can be used to calculate the real random number, like:

evaluate_random (Priv)

At this moment the private key can be published to the blockchain and the evaluation algorithm can be implemented with a smart-contract in a decentralized way.

From a practical point of view, a smart contract with a built-in native random oracle would look like:

function with_rand (input params) return (output params) {
   ...
   variable = Rand();
   ...
}

For evaluating such a smart contract, two transactions should be used:

- the first one would call the function with the public key of the random oracle, initializing the secret random number generation.

transaction init with_rand pub_key_rand

- the second transaction would publish the private key and carry out the whole evaluation and the execution of the rest of the business logic, like:

transaction exec with_rand priv_key_rand

The proposed system unfortunately has one denial of service attack possibility. The one who initiated the random oracle has the private key and can calculate the result in advance. If this result does not benefit him, he may choose not to reveal the private key and not to execute the second transaction.

Tuesday, October 30, 2018

On the need of tokenized business and computational models.


The most natural style of designing a blockchain application is inevitably to imagine a kind of token model as its basic working mechanism. For that, however, we are missing both the theory and the practice of working with tokenized models, like inventing tokenized business models and/or tokenized computational architectures. Examples include but are not limited to:
- tokenized data flow
- tokenized Turing machine
- tokenized Neumann architecture
- tokenized accounting systems and triple-entry accounting
- tokenized business management
- tokenized business models
- tokenized business cooperation models
- tokenized machine learning
- tokenized AI
- ...
And last but not least, we would need general frameworks to abstractly model and describe tokens and the collaboration of tokens.

Wednesday, October 24, 2018

How to create a voting algorithm on a public blockchain


Creating a decentralized voting system on top of the blockchain is actually not as easy as it looks. There are many pitfalls that can result in a wrong implementation. The problem is that a blockchain is pretty much public; even if we speak of consortium blockchain solutions, data and transactions are visible at least to the consortium nodes, which might be less privacy than needed. Some of the problems are the following:

- The identity of the voters should not be revealed: for such a purpose there might be the possibility of a pseudonymous voting solution, where instead of names we simply use, for example, an Ethereum address. Certainly, here the major challenge is to distribute the pseudonymous addresses and make sure that one person votes only once.

- The actual votes should not be revealed during the voting, because they could influence the result. One way can be to split the voting into two phases: in phase one, voters vote with a hash value of the vote and some random salt. In the next phase, there is only aggregation of the votes, where the participants reveal the voted values and salts. Certainly, it is pretty much an open question how the algorithm works exactly, for example what happens if some actors vote but do not reveal their votes in the next round.

- If both the votes and the identities should be 100% hidden, meaning that not even pseudonymous information can be revealed, then it is pretty much questionable whether it can be solved technically at all. Probably specialized zero knowledge proofs can help with the situation.

- A general question is how the voting rights can be distributed in a decentralized way: who or what should be able to vote, and how do they get that right.
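The two-phase (commit-reveal) idea from the second point can be sketched as follows. The helper names are my own; the scheme itself is standard hash commitment.

```python
# Commit-reveal voting sketch: phase one publishes sha256(vote || salt),
# phase two reveals vote and salt, which are checked against the commitment.

import hashlib
import os

def commit(vote: str, salt: bytes) -> str:
    """Phase one: publish only the hash of the vote plus a random salt."""
    return hashlib.sha256(vote.encode() + salt).hexdigest()

def reveal_ok(commitment: str, vote: str, salt: bytes) -> bool:
    """Phase two: check a revealed (vote, salt) pair against the commitment."""
    return commit(vote, salt) == commitment

salt = os.urandom(16)
c = commit("yes", salt)           # published during the voting phase
assert reveal_ok(c, "yes", salt)  # reveal phase: the vote counts
assert not reveal_ok(c, "no", salt)
```

The salt is what prevents a simple dictionary attack on the small set of possible votes; without it, anyone could hash "yes" and "no" and read the commitments directly.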

Monday, October 15, 2018

Identity is a scarce resource in consortium blockchain


To avoid a naive Sybil attack, most decentralized consensus protocols require a scarce resource in order to take part in the consensus mechanism. This resource is computational power in proof of work, and a kind of cryptocurrency in proof of stake. Actually, consortium systems do not differ very much from this idea. In consortium systems, the scarce resource is identity: only nodes with a specially distributed identity are able to participate in the consensus. In this sense it works similarly to the public blockchain networks: the scarce resource guarantees that nobody with a huge bot network, but without the scarce resource, can influence the consensus voting.

Saturday, October 6, 2018

Hashgraph censorship


The Hashgraph technology aims to be a fair, censorship resistant, byzantine fault tolerant technology with a public network as well, where theoretically there is no need to ask permission from the company Swirlds or from consortium members of Hedera, and the network will eventually be as public as, for instance, Ethereum. Recent developments suggest however that the situation won't be so clean and simple. The recent direction is to censor social media appearances that do not match certain corporate policies; certainly the official names are "quality improvement" and "policy compliance", but most dictatorial governments use the same names when censoring local social media to filter out content they do not like. Such censorship raises serious ethical questions even if it is carried out by a corporation, but it is surely not acceptable on an open distributed ledger platform. Censorship of social media will very quickly result in the censoring of individual transactions and decentralized applications on the platform.

So, let us see how Hashgraph can censor your transactions:

- The easiest way is blocking incoming transactions at the peer level, simply not including them in the Hashgraph by dropping the incoming transaction. For that it is actually not necessary to modify the Hedera software itself; incoming transactions can be filtered out on the operating system or network level, like filtering out certain transactions from a certain IP address. Certainly, a customer will eventually recognize that a transaction was dropped and will resend it, possibly to another node. As a consequence, censoring at the peer level works in theory only if all of the peers drop the transaction. However, from a practical point of view a customer will probably give up on including the transaction in the Hashgraph after a certain number of attempts. Since resending too many times, like 20, is probably not acceptable for most applications and use cases, blocking transactions at several peers, for example in a geographical area, can in practice make certain applications unusable, even if from a theoretical point of view the transaction is not 100% blocked.

- A similar censorship attack is to introduce a random delay for incoming transactions at the peer level. It is again something that can be carried out without modifying the official source code. If a transaction is 100% blocked, the customer will recognize it and resend it, possibly to another node. However, if the transaction is only delayed randomly, it is more difficult to recognize. As a random delay of a couple of minutes or hours in transaction processing is not acceptable for most decentralized applications, this can be an efficient denial of service attack on most distributed apps even if it is carried out by only one or two nodes.

- As the transaction is included in an event, it is propagated with a gossip protocol throughout the network. Similar techniques can be used here as well, like blocking the propagation of events that contain a certain transaction, or delaying and randomly blocking events containing a transaction. As the peers are connected with TLS, such an attack is difficult for an external attacker to realize; however, it can easily be realized by the operators of the nodes as a censorship attack, again without modifying the code itself, implementing everything on the operating system or network side. Blocking a transaction or event will be effective only if more than one third of the nodes actively block it; however, with random delaying, a much smaller number of censors could effectively make a DApp unusable.

- There might be further censorship possibilities at the state level, at the point where a transaction is applied to the state. As the implementation details are not known here, such attack vectors need further investigation.

- Last but not least, Hedera is basically a consortium that controls the code. It is pretty much an open question how the software upgrade mechanism will work; what is sure is that if the consortium votes against your transactions or DApp, a new software version can be delivered that efficiently blocks your transactions and DApps. In practice, this process might be accelerated, as upgrading to a new software version can be slow: I would probably create a configurable element in the software itself where operators can individually set blacklisted applications. That would provide a further censorship attack vector.

Monday, July 30, 2018

Cryptoeconomical attacks on Blockchain applications


It is actually a weird thing to identify the attack surface of a blockchain based system. The major problem is that they are not purely software architectures, but rather complex systems containing cryptography, software architecture components and elements based on economy. As a consequence, "hacking" or "gaming" such a system is usually not purely a simple software engineering task. There can be the following types of attacks:
- Classical attacks: like trying to break the cryptography, or exploiting implementation vulnerabilities.
- Monetary attacks: these exploit the fact that a token or several tokens are actively traded on a couple of exchanges. As an example, a pump and dump scheme, or perhaps even shorting against a token or cryptocurrency, can be regarded as such an attack. Sometimes such an attack is not purely monetary, but is for example combined with a negative social media campaign.
- Certainly, there might be hybrid attacks as well that try to exploit system implementation errors combined with economical "gaming". For such categories a new field of cybersecurity should probably be defined.

Thursday, July 26, 2018

Blockchain and evolutionary algorithms

From a conceptual perspective, a blockchain can be regarded as a kind of evolutionary algorithm which is driven indirectly by market forces. The genesis block might contain something like a set of initial variables, which we can regard as the initial population. The population is reproduced in each round with the help of different market factors and human interactions. The variables might take new values, they might recombine different values into new ones, and even brand new variables can appear as well, mimicking something like recombination or mutation. Certainly, it is questionable whether clean algorithms can be imagined for representing something with a genotype or a phenotype. One such example is CryptoKitties, where actually both mutation and recombination are defined in a clearly specified way; however, it might not be the only application that can be regarded as blockchain based market evolution.

Tuesday, July 24, 2018

IOU debt graphs and trust lines on the blockchain


We briefly brainstormed in the previous two blogs how an IOU debt network can be represented on the blockchain and how it can be embedded into a mining process: debt graph and mining. We can extend the model with trust lines as well, meaning that the IOU network can be optimized, and a new IOU issued, only if there is an existing trust line between the two participants. A trust line can be represented again with a matrix T, where

T[i,j] = 0 if participant i does not trust participant j and
T[i,j] = 1 if participant i trusts participant j

It is an open question what further properties a trust matrix should have. As an example, it might be symmetric and/or transitive:

symmetric: if T[i,j] = 1 then T[j,i] = 1 as well
transitive: if T[i,j] = 1 and T[j,k] = 1 then T[i,k] = 1

Having a trust matrix means that there is the possibility to issue a credit, or to optimize a relationship between i and j, if and only if T[i,j] = 1. Certainly, it is pretty much an open question what should happen if a trust line is deleted as well.
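The properties above can be checked mechanically. This is a hypothetical helper sketch; the function names and the nested-list representation of T are my own choices.

```python
# Trust matrix T as a 0/1 adjacency matrix: T[i][j] == 1 means
# participant i trusts participant j.

def can_issue(T, i, j):
    """Credit issuance or optimization between i and j is allowed iff T[i][j] == 1."""
    return T[i][j] == 1

def is_symmetric(T):
    n = len(T)
    return all(T[i][j] == T[j][i] for i in range(n) for j in range(n))

def is_transitive(T):
    n = len(T)
    return all(T[i][k] == 1
               for i in range(n) for j in range(n) for k in range(n)
               if T[i][j] == 1 and T[j][k] == 1)

T = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
# T is symmetric but not transitive: 0 trusts 1 and 1 trusts 2, yet T[0][2] = 0
```

The example shows why the two properties are independent design choices: a symmetric trust network does not automatically allow chained (transitive) credit.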

Monday, July 23, 2018

Optimizing IOU debts and mining


As we have seen in the previous blog, debt optimization practically means proposing a new directed graph structure in a way that the balances of the individual accounts do not change. The easiest way to represent the debt graph is the adjacency matrix, where each A[i,j] element represents the IOU contract from i to j. Based on that representation, we can formally define the balance of an account as well:

Balance i = Sum j (A[i,j]) - Sum k (A[k,i])

Considering a general mining process, there can be several {IT1, IT2, ... ITN} transactions issuing new IOUs, each transaction signed by its creator. On top, there is a set of {OT1, OT2, ... OTN} optimization transactions, either signed by trusted optimizer nodes or by nobody. The two sets of transactions sit in two separate transaction pools. The idea of mining is to find subsets {IT1, IT2, ... ITK} and {OT1, OT2, ... OTK} of the transactions in a way that for every account the balance change is caused only by the issuing transactions, meaning that:

Balance i (new) = Balance i (old) + Sum j (IT[i,j]) - Sum k (IT[k,i]), where IT and OT are the matrices built up by the selected issuing and optimization transactions; in other words, the selected optimization transactions must have zero net effect on every balance. Certainly, the complexity of the network has to be reduced by the optimization transactions; it is an open question how this can be measured.
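The balance definition above can be sketched directly from the adjacency matrix. The function name and the example numbers are my own; the signs follow the formula, with A[i][j] as the IOU amount from i to j.

```python
# Balance of account i from the debt adjacency matrix A:
# Balance_i = Sum_j A[i][j] - Sum_k A[k][i]

def balance(A, i):
    n = len(A)
    return sum(A[i][j] for j in range(n)) - sum(A[k][i] for k in range(n))

A = [[0, 5, 0],
     [0, 0, 3],
     [2, 0, 0]]
# account 0 issued an IOU of 5 to account 1 and received one of 2 from account 2
print(balance(A, 0))  # 5 - 2 = 3
```

A useful sanity check on any optimization step is that the balances of all accounts sum to zero and stay unchanged by the optimization transactions.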

Based on these definitions, there can be a one-shot or a two-round transaction process:
- if we imagine two rounds, the first round is a purely optimization round, while the second one is a classical transaction round.
- In a one-shot process, both the optimization and the new transactions take place together.




  


Optimizing IOU debt graphs on the blockchain


When optimizing an IOU debt graph on the blockchain, we should consider the following properties:
- Issuing a new IOU debt must be associated with a digital identity. I should be able to issue a new IOU only if I can prove with my private key signature that I hold the given identity.
- There should be a balance for each identity on the blockchain, accumulating how much I owe to others and how much is owed to me. The balance can only change when someone creates a new IOU, which requires digital signatures.
- The IOU network can be optimized either by everyone or by special optimizer roles. Network optimization can be executed only in a way that none of the account balances change.
- The effect of the network optimization should be a decreased complexity of the graph, which can be measured for example as the number of edges of the debt graph, or the edges weighted by the debt amounts.
- Decreased complexity should be incentivized; increased complexity should not be allowed.
- The optimization should not be a totally independent round; optimization should run in parallel and consistently with the issuance of new debt.
- It is an open question whether a debt can be transferred or traded explicitly without the optimization mechanism.

Friday, April 13, 2018

Notes on designing a cryptoeconomical protocol


Designing a cryptoeconomical protocol is almost as difficult as designing a cryptographic protocol, even if we consider only the economical part of the system. The economical part should be designed on engineering principles as much as possible, including at least the following considerations:
- a clear strategy for what should be incentivized and what should be discouraged; 
- consideration of economical attacks, such as Byzantine attacks, bribing attacks, or collaboration between attackers;
- economic simulations and assessments of what should happen in different normal and extreme situations;
- considerations for scaling the network.
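As a toy example of the kind of economic simulation meant above, the following sketch estimates how often a randomly sampled validator committee loses its BFT-safe honest supermajority as the attacker's share of the population grows. All parameters (committee size, trial count, attacker fractions) are illustrative assumptions:

```python
# Toy Monte Carlo assessment of a Byzantine-attack scenario: how often does a
# randomly sampled committee end up with >= 1/3 Byzantine members?
import random

def committee_fails(byz_fraction, committee_size, rng):
    """Sample one committee; BFT safety assumes fewer than 1/3 Byzantine members."""
    byz = sum(rng.random() < byz_fraction for _ in range(committee_size))
    return byz * 3 >= committee_size

def failure_rate(byz_fraction, committee_size=100, trials=2000, seed=42):
    """Fraction of sampled committees that break the BFT assumption."""
    rng = random.Random(seed)
    fails = sum(committee_fails(byz_fraction, committee_size, rng)
                for _ in range(trials))
    return fails / trials

for f in (0.10, 0.25, 0.33):
    print(f, failure_rate(f))
```

Even such a crude simulation makes the extreme cases visible: with a 10% Byzantine population the sampled committees are practically always safe, while near the 1/3 boundary the failure rate becomes substantial.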

Barriers of consortium blockchain adaptation


The biggest barrier and drawback of consortium blockchain solutions should not be looked for in the technology itself. Consortium blockchain solutions are best suited to industry fields with many different actors and companies. Delivering a solution to such a field requires efficient coordination between many different companies, probably located in many different countries. Such coordination requires very skilled business analysts and negotiators before the technology roll-out can even start.  

Wednesday, April 11, 2018

Notes on stable cryptocurrency


Designing a stable cryptocurrency is a pretty difficult thing, if it is possible at all. One approach that might work is a cryptocurrency "backed" by a digital asset for which we can estimate both the supply and the demand side. Such a digital asset might be, for example, storage or computation. For both storage and computation we can estimate the supply side, which follows something like Moore's law, and we can probably estimate the demand side as well, based on industry trends or historical data. The only thing we have to do is to design a cryptocurrency whose monetary policy somehow mimics the price of the backing asset. Certainly, there might be unexpected changes in supply or demand, for instance the appearance of a brand new technology, so there should be a possibility for manual monetary policy adjustment.   
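The idea of a monetary policy that mimics the backing asset's price can be illustrated with a hedged sketch: if storage cost declines along a Moore's-law-like curve, a token can stay stable in value by redeeming progressively more storage over time. The base cost and decline rate below are made-up assumptions, not estimates:

```python
# Sketch: keep a storage-backed token stable by adjusting how much
# storage one token redeems as storage gets cheaper. All rates assumed.

def storage_cost_per_gb(year, base=0.02, annual_decline=0.25):
    """Moore's-law-style supply-side estimate: cost per GB falls ~25% a year."""
    return base * (1 - annual_decline) ** year

def gb_per_token(year, target_token_price=1.0):
    """To keep one token worth ~1 unit of currency, let it redeem
    more storage as storage becomes cheaper."""
    return target_token_price / storage_cost_per_gb(year)

for year in (0, 2, 4):
    print(year, round(gb_per_token(year), 1))
```

A real design would replace the fixed decline rate with an on-chain oracle or a governance process, which is exactly where the manual adjustment mentioned above would enter.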

Wednesday, April 4, 2018

Comparing Ethereum with Hashgraph

Based on the framework and dimensions of the previous blog post, we can compare Ethereum with the new emerging technology called Hashgraph. The characteristics of Ethereum can be seen in the following picture:


And similarly, the characteristics of Hashgraph can be seen in the following picture:



Comparing the two frameworks, we can see differences in two areas. First of all, Hashgraph is at the moment rather an extended consensus mechanism than a full-scale distributed ledger technology. That means that some elements are simply not defined exactly by the framework and are pretty much open for further implementation. Such things are related to the exact transaction realization. The computational part of the consensus can be extended pretty easily in Java, but how exactly transactions are implemented is a point that leaves room for several implementations. In this sense, properties like crypto-economics, transaction semantics or transaction privacy are parameters that allow several possibilities in the future. What is sure is that, contrary to Ethereum, Hashgraph fits best in consortium scenarios, the structure of the transaction storage is a non-blockchain one, and both performance and fault tolerance are more than excellent.

Certainly, this article covers the original concept of Hashgraph produced by Swirlds; in the public version of the technology, produced by the Hedera consortium, more structure and services are planned regarding transactions, cryptocurrencies and storage.

Tuesday, April 3, 2018

Comparing cryptocurrencies with Hyperledger Fabric

Based on the framework and dimensions of the previous blog post, we can compare different cryptocurrencies like Bitcoin with Hyperledger Fabric. The characteristics of cryptocurrencies can be seen in the following picture:


The properties of Hyperledger Fabric can be seen in the following picture:


As the comparison also shows, there are some major differences between the two technologies. While cryptocurrencies are intended for public networks, Hyperledger Fabric targets consortium use-cases. This fact is visible in the transaction scope, the higher transaction privacy, and the better performance. Hyperledger Fabric implements a modular consensus mechanism with several different algorithms, ranging from simple crash fault tolerance to simple Byzantine fault tolerance (tolerating at most one faulty node). Another major difference is that Hyperledger Fabric does not implement any kind of token, neither internal nor external. Of course, the transaction semantics are different as well: Hyperledger Fabric has a general-purpose smart contract language, whereas cryptocurrencies usually concentrate on one digital asset.