Many scientific problems, e.g., the simulation of quantum mechanics, the non-linear optimization used in weather forecasting, and the minimization of logic circuits, require computing resources that exceed those of any single computer by many orders of magnitude. The increasing adoption of these methods in today's science has been the driving force behind creating environments where such problems can be solved in a reasonable amount of time. As early as the 1990s, the term "grid computing" emerged: an approach that pools the computational resources of many individual computers into one holistic entity, which can deliver its combined computational power as easily as accessing power from the electrical grid ("The Grid: Blueprint for a New Computing Infrastructure").
The Data Center in Our Homes
While setting up a local grid-computing cluster seems straightforward, it comes at a considerable cost. Not only do you have to purchase enough equipment to deliver the resources you need, but you also have to pay the recurring costs of maintenance and operation. This is not an economical approach for occasional calculations and one-time projects. A few approaches to decentralizing grid computing, however, were presented in the late 1990s and built upon the fact that many people today own very powerful computers, all of which are connected to the internet. If you think about it, a considerable amount of computing power sits not in supercomputer centers or laboratories but in people's homes. Platforms like Distributed.net and, later, SETI@Home and BOINC have shown ways to use these users' devices to form a more powerful grid computer with almost no initial setup costs. However, these platforms mostly relied (and still rely) on volunteer computing, i.e., on participants who are willing to contribute their computational power to projects they care about. Needless to say, less glamorous projects (such as your homework assignment simulation) have little chance of attracting any reasonable amount of computing resources.
This Gap Is Closed by XEL
Cloud computing, a term first introduced in 2007, solved these issues, but it mostly boils down to renting computational resources "in the cloud" from commercial providers. While these services are very reliable, scale well, and are easy to access, there is no real way of telling whether their prices are justified: renting 1,000 instances for a day from one of the major cloud-computing platforms will still cost plenty of money. Also, in most cases it will require you to implement your own work-distribution and management logic to coordinate your instances properly. Our project shows an alternative path to grid computing "for everyone" by adapting the scientific advances made since the 1990s to the Blockchain. Similar to SETI@Home and BOINC, the distribution of work is handled entirely by the XEL software. Besides being a significant contribution to ongoing and future Blockchain research, it has the potential to offer value to users who need computational resources today. Here, centralized providers no longer decide the fair value of a given amount of computational power; the wisdom of the crowd does.
XEL effectively bundles the computational power of all connected nodes that have the computation engine running (opt-in) and distributes it, through Blockchain technology, to arbitrary tasks that anyone can write using XEL's own trustless programming language, ePL. In this ecosystem, nodes with powerful hardware are incentivized to contribute their unused computational resources by being rewarded with crypto-currency payments. In Blockchain terms, this process can be thought of as a form of "mining," similar to mining traditional crypto-currencies such as Bitcoin.
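At a high level, the opt-in/reward cycle described above can be sketched as follows. This is a simplified illustration only, not XEL's actual protocol: all class and method names (`Node`, `TaskPool`, `distribute`) are hypothetical, real tasks are written in ePL rather than Python, and real verification and payment happen on-chain.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: opt-in nodes pick up tasks and earn rewards.
# In XEL the task body would be an ePL program verified on the Blockchain;
# here it is just a Python callable for illustration.

@dataclass
class Task:
    task_id: int
    work: callable        # the computation to perform
    reward: float         # crypto-currency paid per accepted solution

@dataclass
class Node:
    name: str
    opted_in: bool = False
    balance: float = 0.0

    def run(self, task):
        if not self.opted_in:
            return None   # only opt-in nodes contribute their resources
        return task.work()

@dataclass
class TaskPool:
    tasks: list = field(default_factory=list)

    def distribute(self, nodes):
        results = {}
        for task in self.tasks:
            for node in nodes:
                result = node.run(task)
                if result is not None:
                    node.balance += task.reward  # incentive payment
                    results[task.task_id] = result
                    break
        return results

nodes = [Node("idle-laptop"), Node("gpu-rig", opted_in=True)]
pool = TaskPool([Task(1, lambda: sum(range(10)), reward=5.0)])
results = pool.distribute(nodes)
print(results)           # {1: 45}
print(nodes[1].balance)  # 5.0
```

The key design point mirrored here is that contribution is strictly opt-in: a node that has not enabled its computation engine never receives work, while a contributing node is compensated per accepted result.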