Decentralized Compute: First Iteration

We are pleased to announce an important milestone in the Iagon ecosystem: the release of the first iteration of our decentralized compute feature. This launch signifies a key step forward in our ongoing development and showcases our commitment to enhancing the capabilities of our platform. Although we anticipate numerous enhancements and the introduction of new features in the future, we believe this initial release represents a significant advancement.

We invite you to participate in this development phase by testing the new feature, as we continue to strive towards a more efficient and integrated future in decentralized computing.

ℹ️
What is decentralized computing?
Decentralized computing is an architecture in which no single authority governs data access or the logic housed within the network. Instead, control is distributed across the network's nodes, which rely on commonly agreed-upon algorithms. This ensures a collaborative approach to data management and decision-making, free from centralized oversight.

Iagon Compute CLI (Test Node)

The Iagon Compute Node CLI is a command-line application that enables users to contribute their computing resources to the Iagon network in exchange for rewards.

  • For instructions on how to get started and use the node, please follow the guidelines provided in the link.
  • Download the latest version of the Iagon Compute CLI Node - check it here

Iagon Compute Node Dashboard

To authenticate your Compute Node, please visit the test dashboard via this link.

💡
Ensure you use the Cardano Preview network for testing purposes.

Once logged in, you can authenticate your node by entering its details. This will allow you to view various basic metrics.

Kindly note that we will introduce new features and improvements on the Cardano Preview network, at the same URL, ahead of the final mainnet release.

What's next?

  • Release staking & rewards model for compute providers
  • Implement gateway 
  • Implement external monitoring server to check for uptime and request other internal metrics
  • Different benchmarks for different versions
  • Release detailed documentation for compute providers

Minimum Requirements

ℹ️
Note:
The minimum requirements are subject to revision following additional research.
  • Memory: 2 GB 
  • CPU: 2 Cores
  • Storage: 100 GB
  • OS: Linux (Ubuntu >=22.04)
  • Others:
    • Internet Connection with Public IP and Port Forwarding
    • OpenSSH Server Running
    • Uptime
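
If you want to check a machine against these values before installing the node, the short Python sketch below (not part of the Iagon tooling, Linux-only) reads the core count, total memory, and free disk space:

```python
import os
import shutil

# Thresholds taken from the minimum-requirements list above.
MIN_CORES = 2
MIN_MEMORY_GB = 2
MIN_STORAGE_GB = 100

def total_memory_gb() -> float:
    """Total RAM in GiB, read from /proc/meminfo (Linux only)."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / (1024 ** 2)  # value is in KiB
    return 0.0

cores = os.cpu_count() or 0
memory = total_memory_gb()
free_disk = shutil.disk_usage("/").free / (1024 ** 3)

print(f"CPU cores : {cores} (need >= {MIN_CORES})")
print(f"Memory    : {memory:.1f} GB (need >= {MIN_MEMORY_GB} GB)")
print(f"Free disk : {free_disk:.1f} GB (need >= {MIN_STORAGE_GB} GB)")

ok = cores >= MIN_CORES and memory >= MIN_MEMORY_GB and free_disk >= MIN_STORAGE_GB
print("PASS" if ok else "FAIL")
```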

Evaluation Pass Criteria

* Values listed are the minimum requirements.
* For how these values are calculated, see the benchmarking process described below.

  • CPU: 20 IPS (eps)
  • Memory: 25000 Mbps (Read), 20000 Mbps (Write)*
  • Storage:
    • Sequential: 200 Mbps (Read), 200 Mbps (Write)
    • Random: 50 Mbps (Read), 50 Mbps (Write)
  • Network:
    • Ping: 200 ms (max)
    • Bandwidth: 20 Mbps (Download/Read), 20 Mbps (Upload/Write)
    • Latency: 200 ms (max)
  • Notes:
    • Stake pool on Cardano
    • db-sync APIs on Cardano
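
To make the pass/fail logic explicit, the sketch below (not part of the Iagon tooling) compares a set of hypothetical measured values against these thresholds, treating ping and latency as maximums and everything else as minimums:

```python
# Thresholds from the evaluation pass criteria above.
MINIMUMS = {
    "cpu_eps": 20,
    "mem_read_mbps": 25000, "mem_write_mbps": 20000,
    "disk_seq_read_mbps": 200, "disk_seq_write_mbps": 200,
    "disk_rand_read_mbps": 50, "disk_rand_write_mbps": 50,
    "net_download_mbps": 20, "net_upload_mbps": 20,
}
MAXIMUMS = {"ping_ms": 200, "latency_ms": 200}

def passes(measured: dict) -> bool:
    """True only if every minimum is met and no maximum is exceeded."""
    meets_minimums = all(measured[key] >= limit for key, limit in MINIMUMS.items())
    within_maximums = all(measured[key] <= limit for key, limit in MAXIMUMS.items())
    return meets_minimums and within_maximums

# Hypothetical benchmark results, purely for illustration.
example = {
    "cpu_eps": 35, "mem_read_mbps": 31000, "mem_write_mbps": 27000,
    "disk_seq_read_mbps": 450, "disk_seq_write_mbps": 410,
    "disk_rand_read_mbps": 90, "disk_rand_write_mbps": 85,
    "net_download_mbps": 95, "net_upload_mbps": 40,
    "ping_ms": 35, "latency_ms": 60,
}
print("PASS" if passes(example) else "FAIL")
```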

Benchmarking Process

CPU

We use sysbench to calculate events per second (eps), which serves as the IPS metric for the CPU.

For this, we run a sysbench CPU stress test that calculates primes up to 200,000 in single-threaded mode for a maximum of 10 seconds.
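
For illustration, a comparable run can be reproduced with sysbench directly; the short Python sketch below (not part of the Iagon CLI, and not necessarily the exact flags it uses) invokes sysbench and reads the eps figure from its output:

```python
import re
import subprocess

# Single-threaded sysbench CPU test: primes up to 200,000, capped at 10 seconds.
cmd = [
    "sysbench", "cpu",
    "--cpu-max-prime=200000",
    "--threads=1",
    "--time=10",
    "run",
]
output = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# sysbench prints a line such as "events per second:  1234.56"
eps = float(re.search(r"events per second:\s*([\d.]+)", output).group(1))
print(f"CPU score: {eps:.2f} eps (pass threshold: 20)")
```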

Memory

We use sysbench to calculate read and write bandwidth.

For this, we run sysbench memory tests for both read and write of 20 GB of data in 1 KB blocks, single-threaded, for a maximum of 10 seconds.
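
For illustration, the sketch below (not part of the Iagon CLI) runs comparable single-threaded sysbench memory tests and reads the MiB/sec figure sysbench reports; how that figure is converted to the Mbps thresholds listed above is not shown here:

```python
import re
import subprocess

def memory_bandwidth(operation: str) -> float:
    """Run a single-threaded sysbench memory test (20 GB in 1 KB blocks, max 10 s)
    for the given operation ("read" or "write") and return MiB/sec."""
    cmd = [
        "sysbench", "memory",
        "--memory-block-size=1K",
        "--memory-total-size=20G",
        f"--memory-oper={operation}",
        "--threads=1",
        "--time=10",
        "run",
    ]
    output = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    # sysbench reports e.g. "20480.00 MiB transferred (4096.00 MiB/sec)"
    return float(re.search(r"\(([\d.]+) MiB/sec\)", output).group(1))

print("read :", memory_bandwidth("read"), "MiB/sec")
print("write:", memory_bandwidth("write"), "MiB/sec")
```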

Storage

We use fio to calculate sequential and random read/write bandwidth.

For this, we run two fio jobs, one for sequential and one for random read/write, each with 1 GB of data in 4 KB blocks for a maximum of 5 seconds.
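
For illustration, the sketch below (not part of the Iagon CLI, and not its exact job definitions) runs one sequential and one random mixed read/write fio job with the parameters described above and reads the bandwidth from fio's JSON output:

```python
import json
import subprocess

def fio_job(name: str, rw: str) -> dict:
    """Run one fio job (1 GB, 4 KB blocks, max 5 s) and return bandwidth in MiB/s.
    rw is "rw" for sequential mixed read/write or "randrw" for random."""
    cmd = [
        "fio", f"--name={name}", f"--rw={rw}",
        "--bs=4k", "--size=1G",
        "--runtime=5", "--time_based",
        "--filename=fio-testfile",
        "--output-format=json",
    ]
    output = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    job = json.loads(output)["jobs"][0]
    # fio reports bandwidth ("bw") in KiB/s
    return {
        "read_MiBps": job["read"]["bw"] / 1024,
        "write_MiBps": job["write"]["bw"] / 1024,
    }

print("sequential:", fio_job("seq", "rw"))
print("random    :", fio_job("rand", "randrw"))
```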

Network

The process is similar to the network benchmark used for storage nodes.

For ping, we ping the nearest Cloudflare server, collect a minimum of 5 replies, and take the average round-trip time.
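
For illustration, the sketch below pings Cloudflare's anycast resolver (1.1.1.1, used here as a stand-in for the nearest server the CLI selects) and extracts the average round-trip time from the Linux ping summary:

```python
import re
import subprocess

# Send 5 ICMP echo requests and report the average round-trip time.
output = subprocess.run(
    ["ping", "-c", "5", "1.1.1.1"],
    capture_output=True, text=True, check=True,
).stdout

# Linux ping summary line: "rtt min/avg/max/mdev = 9.1/10.2/12.0/0.9 ms"
avg_ms = float(re.search(r"= [\d.]+/([\d.]+)/", output).group(1))
print(f"average ping: {avg_ms:.1f} ms (pass threshold: max 200 ms)")
```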

For bandwidth, we perform uploads and downloads over 4 iterations; in each iteration we upload/download preset payloads of different byte sizes and calculate the average.

For latency, we use the average latency observed while performing the uploads and downloads for the bandwidth calculation.
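
For illustration, the sketch below measures download bandwidth and a rough latency figure against Cloudflare's public speed-test endpoint; the endpoint, payload sizes, and timing method are illustrative placeholders rather than the presets the CLI uses, and uploads would be measured analogously:

```python
import time
import urllib.request

# Placeholder endpoint and payload sizes; the CLI uses its own presets.
TEST_URL = "https://speed.cloudflare.com/__down?bytes={size}"
PAYLOAD_SIZES = [1_000_000, 5_000_000, 10_000_000, 25_000_000]  # one per iteration

speeds_mbps, latencies_ms = [], []
for size in PAYLOAD_SIZES:
    start = time.monotonic()
    with urllib.request.urlopen(TEST_URL.format(size=size)) as response:
        headers_at = time.monotonic()  # time until response headers: crude latency proxy
        payload = response.read()
    elapsed = time.monotonic() - start
    speeds_mbps.append(len(payload) * 8 / elapsed / 1e6)  # Mbps
    latencies_ms.append((headers_at - start) * 1000)       # ms

print(f"download: {sum(speeds_mbps) / len(speeds_mbps):.1f} Mbps (avg of {len(speeds_mbps)} iterations)")
print(f"latency : {sum(latencies_ms) / len(latencies_ms):.1f} ms")
```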

💡
Your feedback matters
Please note, this is the initial testing phase of our first iteration, and your feedback and active participation are crucial. We rely on your contributions to help us build a strong and robust ecosystem. Your insights are invaluable as we strive to enhance and refine our platform.

Visit our dedicated Discord for compute test feedback and discussions - link