Execute your tasks on XEL

Avoid the cost of purchasing your own hardware or renting cloud services to execute resource-intensive computational tasks. You can use XEL's massive pool of computing power and select the time you need for your task, while setting the price you are willing to pay as a bounty reward. With XEL, you control the cost of executing computations.

XEL's highly flexible system lets you implement a wide range of use cases with XEL's own programming language, ePL. To run a custom task, follow the tutorial. This will ensure an easy, streamlined process every time you want to execute a task.

The typical idea behind a blockchain-based supercomputer is that you pay others to solve your tasks and return the solutions to you. And because you pay, you want to be certain that the results you receive are correct. XEL solves this issue elegantly. As you will learn in the programming tutorial, the ePL language has a useful property: it allows the scientist, the person who programmed the job, to include a verification function. If a result does not pass that verification, it is not even broadcast by the network. Every solution you receive can therefore be treated as verified and correct.
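The verify-before-broadcast pattern described above can be sketched as follows. This is illustrative Python, not actual ePL; the function names and the toy verification criterion are hypothetical.

```python
# Illustrative sketch (Python, not ePL) of the verify-before-broadcast
# pattern: a node only broadcasts a solution that passes the
# scientist-supplied verification function. All names are hypothetical.

def verify(solution: int) -> bool:
    # Stand-in for the scientist's verification function: returns True
    # only for solutions that actually satisfy the job's requirements.
    return solution % 97 == 0  # toy criterion for illustration

def try_broadcast(solution: int):
    """Broadcast a candidate solution only if it verifies; drop it otherwise."""
    if verify(solution):
        return f"broadcast:{solution}"  # accepted as a bounty
    return None                          # never reaches the network

assert try_broadcast(194) == "broadcast:194"  # 194 % 97 == 0, verified
assert try_broadcast(5) is None               # fails verification, dropped
```

Because verification happens before broadcast, the scientist never has to re-check results client-side; the network only carries solutions that already passed the job's own acceptance test.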
Once you submit a task to the blockchain, it is downloaded by nodes that contribute their computational resources to the network; let's call them "workers" for now. Workers that find actual solutions, which we also call "bounties", are rewarded by the scientist with a "bounty reward" paid in XEL tokens. The scientist sets these bounty rewards when the job is created and should ideally set them to an amount that attracts participation in running the job. As the network grows, competition among scientists to attract workers increases; scientists will therefore want to calibrate their bounty rewards to a fair market value to ensure enough interest in their jobs.
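The job/bounty flow above can be sketched like this. This is a simplified Python model, not XEL's actual API; the class, fields, and method names are all hypothetical.

```python
# Illustrative sketch (Python, not XEL's actual API) of the bounty flow:
# the scientist fixes the per-solution reward and the number of bounties
# at job creation; workers are paid until the job is filled.
from dataclasses import dataclass, field

@dataclass
class Job:
    scientist: str
    bounty_reward: int            # XEL tokens per accepted solution, set at creation
    bounties_wanted: int          # how many solutions the scientist will pay for
    paid: list = field(default_factory=list)

    def submit_bounty(self, worker: str) -> int:
        """Pay a worker that found a verified solution; returns 0 once filled."""
        if len(self.paid) >= self.bounties_wanted:
            return 0
        self.paid.append(worker)
        return self.bounty_reward

job = Job(scientist="alice", bounty_reward=50, bounties_wanted=2)
assert job.submit_bounty("worker-1") == 50
assert job.submit_bounty("worker-2") == 50
assert job.submit_bounty("worker-3") == 0  # job already filled, no payout
```

The point of the sketch is the incentive structure: the reward per bounty is a market price the scientist chooses up front, and workers compete for a fixed number of payouts.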
How does this scale? Consider an example. Since training a deep neural network is a very resource-intensive task, other projects usually have to "assign" individual subtasks to individual nodes and allow only that one node to work on each. This is reasonable: nobody wants to work for two hours on a render job only to find out that someone else was quicker. However, it has a drawback: what happens with very slow nodes, or nodes that acquire a lock and then leave? Such behavior can make the task run significantly longer than it would on a home computer.

There is also the question of how many slices to cut a job into, and how many locks to offer. That number limits the amount of computational power you can get out of the network. If a task can only be cut into 100 individual pieces (because a smaller denomination would add too much overhead relative to the size of each piece), then at most 100 nodes will ever work on your task, whether the network has 100 or 100,000,000 active nodes. XEL does not have this problem: the computational power of all 100,000,000 nodes would be available at once. This is possible because of how ePL is designed (one iteration of a program runs a few seconds at most) and because ePL avoids locking entirely (the search space is so large that two nodes are unlikely to ever work on the same inputs simultaneously).
With decentralization, there is no central point of control, so the network can expand and scale without infrastructure limitations; the distributed network of nodes also eliminates single points of failure and attack.
Neither the publishers of a task nor those performing it are required to share their private information at any stage of the process. There is no need to trust third parties, since the XEL platform handles the entire process in a trustless manner.
Don't worry! Just follow the tutorial for submitting a task. If you face any difficulties, you can always seek assistance from our supportive community on the platforms below.