2020.10.11 - Hardware upgrade

The server running our producer node has been upgraded. We opted for a CPU with a faster base frequency (3.3 GHz -> 3.6 GHz) and a larger cache (12 MB -> 32 MB), and we doubled the amount of RAM (32 GB -> 64 GB). The new server is also geographically closer to our first line of relay servers.

All our relays are now running the latest Cardano Node version (1.21.1).

2020.09.27 - Update

We have been running a subset of our relays on Cardano Node version 1.20.0 for the past few days. The latest version of the Cardano Node seems stable; we haven't observed any CPU spikes or excessive memory usage.

The code running on our Block Producer node has now been updated to version 1.20.0 as well. We also took the opportunity to rotate our KES keys during our node upgrade process.
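A rotated KES key is only valid for a fixed number of KES periods, so the timing of a rotation matters. The arithmetic can be sketched as follows (a minimal illustration using the mainnet Shelley genesis parameters; the helper names are ours, not part of any Cardano tooling):

```python
# Sketch: compute the current KES period and the expiry of a freshly
# rotated key. Parameter values are from the Cardano mainnet Shelley genesis.
SLOTS_PER_KES_PERIOD = 129_600   # 1.5 days of 1-second slots
MAX_KES_EVOLUTIONS = 62          # how many periods a KES key stays valid

def kes_period(slot: int) -> int:
    """KES period containing a given absolute slot number."""
    return slot // SLOTS_PER_KES_PERIOD

def expiry_period(rotation_slot: int) -> int:
    """First KES period in which a key rotated at rotation_slot is no longer valid."""
    return kes_period(rotation_slot) + MAX_KES_EVOLUTIONS

# A freshly rotated key is valid for roughly 62 * 1.5 = 93 days.
```

Rotating during a planned node upgrade, as we did, keeps the key well clear of its expiry period.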

All our servers are healthy, up and running.

2020.08.08 - Update

The Shelley hard fork was successful.

Cardano is transitioning from a phase in which all the blocks are made by the IOHK servers running the OBFT protocol to one in which all the blocks are made by stake pools running the Praos protocol.

This transition will happen in small steps: every 5 days, the percentage of blocks made by stake pools will increase by a small factor. The proportion of blocks made by IOHK versus stake pool operators is controlled by the 'd' parameter. We are currently at d=1, which means the Cardano blockchain is run entirely by the IOHK servers; we will eventually reach d=0, when the IOHK servers will shut down and all the work will be done by stake pool operators.
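The effect of 'd' on block production can be sketched with a little arithmetic (a rough illustration assuming the mainnet epoch parameters; the function name is ours):

```python
# Sketch: expected block split between IOHK (OBFT) and stake pools (Praos)
# for a given value of the 'd' parameter. Mainnet parameters assumed:
EPOCH_SLOTS = 432_000        # 5 days of 1-second slots
ACTIVE_SLOT_COEFF = 0.05     # fraction of slots expected to produce a block

def expected_blocks(d: float) -> tuple:
    """Return (IOHK blocks, stake pool blocks) expected per epoch."""
    total = EPOCH_SLOTS * ACTIVE_SLOT_COEFF   # ~21,600 blocks per epoch
    return total * d, total * (1.0 - d)

# d=1.0 -> all ~21,600 blocks come from IOHK; d=0.0 -> all from stake pools.
```

So each small decrement of 'd' hands a proportional slice of the ~21,600 blocks per epoch over to the stake pools.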

The initial plan was to start decrementing the 'd' parameter on August 8. This implied that the first blocks made by stake pools would have appeared during epoch 210 (Aug 9-13), the first rewards would have been calculated during the following epoch (Aug 14-18), and those rewards would have been made available to delegators on August 18 at the end of epoch 211 (21:44:51 UTC).

Unfortunately, Daedalus 2.0.0 had an issue that caused one third of the stake pools not to show up in the delegation center. The issue has since been fixed with the release of Daedalus 2.0.1, but it prevented many pools from collecting stake in time.

IOHK decided to postpone the decrement of the 'd' parameter by 1 epoch to be fairer to the stake pool operators initially left out of the game. This means that stake pools will be able to mint blocks starting 5 days after the planned date, and the rewards to delegators will be delayed by the same period of time (from Aug 18 to Aug 23).
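The date arithmetic behind the delay is simple: epochs last exactly 5 days, and the Shelley era (epoch 208) began on July 29, 2020 at 21:44:51 UTC. A minimal sketch (the helper is ours, but the constants come from the dates above):

```python
from datetime import datetime, timedelta, timezone

# Shelley era started at the epoch 208 boundary; every epoch lasts 5 days.
SHELLEY_START = datetime(2020, 7, 29, 21, 44, 51, tzinfo=timezone.utc)
EPOCH_LENGTH = timedelta(days=5)
FIRST_SHELLEY_EPOCH = 208

def epoch_end(epoch: int) -> datetime:
    """UTC timestamp at which the given Shelley-era epoch ends."""
    return SHELLEY_START + (epoch - FIRST_SHELLEY_EPOCH + 1) * EPOCH_LENGTH

original_payout = epoch_end(211)   # end of epoch 211: Aug 18, 21:44:51 UTC
delayed_payout = epoch_end(212)    # one epoch later:  Aug 23, 21:44:51 UTC
```

Shifting everything by one epoch therefore moves the first delegator payout from August 18 to August 23.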

On Friday, August 14th, IOHK will release more information on the decay rate of the 'd' parameter in an official blog post. We will then know when the system will be fully decentralized and stake pools will be running at full steam.

Daedalus 2.1.0 is now available and introduces the ability to redeem ITN rewards.

2020.07.29 - We are live

LEAF pool has been registered on the blockchain and it is visible in Daedalus 2.0.0. It is now possible to delegate to our pool.

In the coming days, we will increase the initial pool pledge as soon as the ITN rewards are redeemed.

2020.07.29 - Update

We are ready for the Shelley hard fork that will take place today at 21:44:51 UTC. LEAF pool will be registered on the blockchain right after the hard fork and will be visible in Daedalus a few hours later.

We really appreciate being recognized for all the work we put into the Incentivized Testnet with the award of the following badges:

leaf pool

2020.07.26 - Update

LEAF pool has been registered on "mainnet_candidate_4", most likely the last of the testnets. We are waiting for the release of a new Daedalus version to fund the pool and increase its pledge.

All the data collected during the incentivized testnet period will be preserved and shown below.

2020.06.01 - Getting ready for the Shelley era

We are reorganizing our hardware infrastructure in preparation for the Haskell testnet and in view of the upcoming Shelley era.

During the Haskell testnet we will test our new software/hardware configuration, consisting of 6 nodes (1 master node + 5 relay nodes) running in 4 different data centers.

Our mission is to contribute to the Cardano network's decentralization by running a reliable and independent node. In order to meet this goal we don't want to concentrate all our servers in the hands of a single service provider nor to have all of them located in a single facility.

We will not rely on any AWS-managed server. We will instead use three other service providers to manage our servers, which will be geographically distributed across 4 countries (Netherlands, Germany, UK, USA). This will help us avoid a single point of failure by implementing basic concepts of redundancy.

Operations of the LEAF stake pool on the Incentivized Testnet will not be impacted by the activities in preparation for the Shelley era.

Update - 03 Mar 2020

Recently, the performance of LEAF Pool touched a few low points.

The pool configuration has remained substantially unchanged over the last 10 epochs, apart from some marginal tweaks. It looks like the network is no longer as stable as it used to be, and this is affecting the performance of our pool.

In Epoch 79 we upgraded to the latest official version of Jormungandr (v0.8.13), but it performed poorly. Halfway into Epoch 80 we had to revert to version 0.8.9.

LEAF Pool relies on two servers: "S1" is located in Frankfurt and has a six-core Intel Xeon E-2136 processor clocked at 4.5 GHz (6 cores / 12 threads); "S2" is located in Amsterdam and has two quad-core Intel Xeon E5620 processors clocked at 2.40 GHz (8 cores / 16 threads). Both servers have a full-duplex 1 Gbit/s internet connection. One server runs the pool while the other runs a passive node and is used to test new configurations and as a backup solution should the first server go down.

We have now migrated LEAF pool from S1 to S2 and we will continue monitoring its performance.

Incident Report - 14 Feb 2020

At 01:40 UTC the Jormungandr process running on our node stopped functioning properly.

Unfortunately, our watchdog process, designed to monitor and restart the node in the event of this type of failure, was not up and running. The watchdog process had been stopped for maintenance activities and never restarted.
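One way to avoid a forgotten watchdog is to let the service manager itself own the restart logic, so there is no separate process that can be left stopped after maintenance. A minimal sketch, assuming a systemd-based host (the unit name, paths and config file are hypothetical, not our actual setup):

```ini
# Hypothetical systemd unit sketch: systemd restarts the node on failure,
# so no standalone watchdog process needs to be kept running.
[Unit]
Description=Jormungandr node (LEAF pool)
After=network-online.target

[Service]
ExecStart=/usr/local/bin/jormungandr --config /etc/jormungandr/node.yaml
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

With `Restart=on-failure`, a crash like the one at 01:40 UTC would have triggered an automatic restart within seconds.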

This was the first major problem we observed since the upgrade to Jormungandr v0.8.9.

In the morning we opted to stop the node and restart it with the latest release of Jormungandr, v0.8.10. We observed the same problems we had already seen with v0.8.10-alpha1 and decided to revert to v0.8.9.

The performance of our node took a big hit in the current epoch. We started Epoch 62 with the lowest slot count we have had so far, and we lost 12 out of 22 slots due to the node downtime.

The node is now running with the same configuration that gave us near-100% performance in previous epochs and should recover normal operation starting with Epoch 63.

Update - 28 Jan 2020

LEAF pool has been running on Jormungandr v0.8.7 for the last 5 days. Pool operators expressed mixed feelings about this particular release in the official Telegram channel. We found it to be as stable as the previous release, but we also noticed a different load on the cores of our server.

A few threads of the Jormungandr process started consuming an unusual amount of CPU time. This is something we had never noticed before, and we are now monitoring it. It could also be related to a few changes we made to the pool configuration; things are moving quite fast. We are updating the node on a weekly basis and tweaking configurations in between slot times.

In the meantime, interesting commits are piling up in the Jormungandr repo in preparation for the next release. Starting with v0.8.8, we decided to change our upgrade policy: we will upgrade a passive node first, look for possible regressions, and then update the pool process 24 hours later.

Update - 21 Jan 2020

The latest release of Jormungandr (v0.8.6) improved the stability of the node, our uptime is now at 6 days and counting.

Unfortunately, it is still not perfect. Issue #1580 is causing us to miss a few blocks. The issue has been marked as 'high priority' and we are confident that the IOHK team will fix it soon. In the meantime, the node performance will be slightly impacted. Considering that in turn we will have fewer leader slots assigned to us, there is a possibility that the ROI of our pool will decrease to 8-10% during the next few epochs.

Jormungandr is on a weekly release schedule. We will upgrade our node as soon as a new release is available.

Our first million Ada

We are very proud to announce that at the end of epoch 34 we reached the one million ada mark in rewards for our delegators.

Since upgrading to Jormungandr v0.8.6, our node has been much, much more stable. The node uptime is now at 50 hours and counting and it's serving ~2400 connections.

The way forward

We decided to test fewer server configurations for a longer period of time. The instability of the network introduced high variance in the pool performance, epoch over epoch, even for the same configuration. We are currently collecting more data and trying to define better metrics to assess server performance, resource utilization and bottlenecks before going back for more optimizations.

When we are happy with the pool performance and our set of tools to monitor and control the stake pool, it will be time to show some love to this website (newsletter, live statistics, alerts, etc.).

Our mission is to contribute to the Cardano network's decentralization by running a reliable and independent node.
And we want to get it right.

Pedal to the metal

After the hardware upgrade of December 21st, we tried several node and kernel configurations. In the end we were not pleased with our virtual server provider and decided to go for a dedicated server solution. On December 28th, at the beginning of Epoch 15, LEAF stake pool migrated to the new server.

Simply put, the new server is stupid fast and overspec'd for the task.

We also added a couple of charts at the beginning of the page. Return on investment (ROI) data is reported as-is, while the performance graph represents the number of blocks signed over the total number of blocks assigned to the stake pool.
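The performance metric is simply a ratio expressed as a percentage; a minimal sketch (the function name is ours):

```python
# Sketch: performance metric used in the graph — blocks signed over
# blocks assigned, expressed as a percentage.
def performance(signed: int, assigned: int) -> float:
    """Percentage of assigned leader slots for which a block was signed."""
    if assigned == 0:
        return 0.0  # no slots assigned this epoch: report 0 rather than divide by zero
    return 100.0 * signed / assigned
```

For example, signing 9 blocks out of 10 assigned slots yields a performance of 90%.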

With the old server, we had a failure rate of 50-60% (we should also note that the network was really unstable in those epochs). The new server is making a difference, reducing the failure rate to 5-10%.

Our goal is, of course, 100% reliability. We will spend the next two weeks applying incremental changes to the server configuration. Improvements/regressions will be quantified over a two-epoch observation period. We will keep you posted.


The server maintenance activity scheduled for 17:20 UTC on December 21st was successful. In less than 10 minutes we took the server offline, doubled the number of vCPUs and the RAM, and got back online.


The sense of community

Jormungandr v0.8.4 has been released. The node was updated right at the beginning of Epoch 6, but the release didn't solve the fork issues. Practically every node has been missing assigned blocks.

The general feeling is that all stake pool operators are in this together and are trying their best; no discouragement. It is called a Testnet for a reason, after all.

Shout out to all those helping fellow stake pool operators and welcoming newcomers on the official forum, Telegram channels, and in their Reddit posts, tweets, blogs, YouTube videos and live streams.

The long night of Epoch 5

Epoch 5 is almost over. It was not easy.

The forking of the blockchain that we saw happening in the previous days got worse. We still managed to process ~90 blocks during this epoch, but we had to trade a few hours of sleep to make sure that the node was not running on the wrong side of the chain.

Hopefully things will get better with future releases of Jormungandr.

We had a bumpy start

During Epochs 3 and 4 the Shelley Incentivized Testnet network became partitioned. We had to restart our node on numerous occasions, encountering problems during the bootstrapping phase.

We do apologize for the prolonged downtime of our node. We are closely monitoring the state of the network to minimize the impact of similar events should they occur again.

In the coming weeks, we will work to establish our presence on social media channels and set up an email contact for support requests.

In the meantime, please buckle up and enjoy the ride :)