2021.04.20 - Update
Since the last update several maintenance activities were performed on our servers.
The latest linux kernel security patches were applied to all the servers.
All our nodes were updated to version 1.26.1. The update significantly improved node performance, reducing CPU and RAM usage and fixing the "missed slots" issue that caused our pool to lose some blocks in epoch 257.
A new server has been ordered and set up: an old server has been decommissioned and one of our private relays has been migrated to the new hardware (AMD Ryzen 5 3600 @3.8GHz, 6 Cores / 12 Threads, 64GB RAM).
On Sunday, April 18th, IOHK issued an announcement asking all pool operators to update their Cardano nodes to version 1.26.2 before the start of epoch 261 (April 20th at 21:44:51 UTC). Version 1.26.1 introduced an issue that caused all block producer nodes to perform extra calculations at the epoch boundary, pausing block generation for 25 minutes across the entire network. We upgraded our nodes to version 1.26.2 soon after IOHK's request and are ready for the next epoch boundary.
2021.04.05 - Update
At the end of the current epoch (257), LEAF pool will generate lower rewards than average due to the loss of some blocks.
Recently the Cardano network has not been running as stably as we would like. Over the last 4 days we tracked 200 fork events while analyzing our node logs.
Coincidentally, this is the first epoch with full decentralization, in which all blocks are minted by stake pool operators. We don't think this is the cause of the performance loss. More likely there are underperforming nodes in the network (nodes with high propagation times) that are disrupting block production for the nodes expected to create blocks after them.
There is an open GitHub issue regarding this matter: https://github.com/input-output-hk/ouroboros-network/issues/2913.
IOG has acknowledged the problem and will be working on it.
In the meantime, we have decided to start working immediately on hardware upgrades that were planned for May/June. In the coming days we are going to upgrade the hardware of three of our servers and improve our network topology. We will keep you posted on the progress of the hardware upgrades.
We are also testing the new cardano node version 1.26.1 on the Testnet, so far so good. We are ready to upgrade our Mainnet nodes as soon as it is officially released, hoping it will mitigate some of the problems we have observed.
2021.03.01 - 1 Million Ada
Since the rollout of the Shelley era in August last year we have redistributed 1'000'000 ada in rewards to our delegators.
Coincidentally, we hit this milestone exactly on the day of the successful hard fork of the Mary era :)
On this note we would like to welcome aboard all our new delegators and thank you all for your messages of support!
2021.02.13 - Update
All our servers are now running the latest cardano node version. We are ready for the Mary hard fork event.
2021.02.07 - The "Mary" hardfork
The rollout of Goguen era functionalities continues.
On December 16th, 2020, the "Allegra" hardfork introduced the token-locking mechanism to the Cardano blockchain, paving the road for Project Catalyst and on-chain voting on Cardano.
The next hardfork event - codenamed "Mary" - will introduce the long-awaited support for native tokens and smart contracts. More info on this topic in the official blog post: https://iohk.io/en/blog/posts/2021/02/04/native-tokens-to-bring-new-utility-to-life-on-cardano/
At the end of January we upgraded all our nodes running on the Testnet. On February 3rd, the Testnet went through the hardfork event without any issue.
During Epoch 247 we will upgrade the rest of our nodes on the Mainnet servers to be ready for the hardfork event later this month (the date has not been officially announced yet, but we are targeting February 22nd).
2021.01.30 - Update
We are getting ready for the hard fork event that will occur in late February. In a few days we will give you a more detailed update.
2021.01.25 - Epoch 243 summary
Work on our backend code continues. We are also moving from updating the main page at the end of every epoch to updating it whenever we have useful information to share about our stake pool or the Cardano project in general.
2021.01.20 - Epoch 242 summary
Forecasted rewards for the next epoch will be back to 5%. Once again, we have no control over the number of blocks assigned to our pool each epoch.
2021.01.15 - Epoch 241 summary
Forecasted rewards for the next epoch are around 4%.
2021.01.10 - Epoch 240 summary
Thank you for your suggestions. We will address them as soon as the work on our backend code moves to production.
2021.01.05 - Epoch 239 summary
The work on the code we use to manage our servers continues. We are making changes to our scripts to facilitate the migration of nodes from server to server. This will come in handy when the hardware requirements for running a cardano node increase. More and more functionality will be added to the Cardano blockchain, and Stake Pool Operators will be able to opt in to perform the most demanding tasks needed to run the network. Being able to easily migrate nodes between servers will let us scale up the processing power of our stake pool as soon as we need it.
2020.12.31 - Epoch 238 summary
The last epoch of the year was very quiet. ROI for the next epoch should be back above 5%.
Wishing everyone a happy new year!
2020.12.26 - Epoch 237 summary
During the current epoch we have done some work on our scripts and we have started preparing a new server for the Testnet.
On the 28th of December we will shut down our first dedicated server that we ordered in the Netherlands exactly one year ago. We already have a new server up and running in Germany to replace the one we are going to retire.
2020.12.21 - Epoch 236 summary
The first epoch after the Allegra hard fork was quite smooth.
The only noticeable change was slightly higher than usual CPU usage, caused by other nodes in the network that didn't update to the latest version of cardano node. In the next few days we will start banning the outdated nodes from our relay servers. The IP address bans will be temporary, while we wait for IOHK to release a new node version able to mitigate this problem.
2020.12.16 - Epoch 235 summary
All our nodes have been updated to v1.24.2 for the Allegra hard fork that just occurred at the beginning of the current epoch.
For the current epoch only 20 blocks have been assigned to our pool. This will translate into rewards roughly 30% lower than average.
2020.12.12 - One year of LEAF pool
Exactly one year ago the stake pool registration was submitted to the Cardano Foundation ITN stake pool registry.
Today marks one year of continuous operation of LEAF. During all this time we have learned a great deal, going from the Incentivized Testnet, through several Shelley testnets, and eventually to the Shelley Mainnet. We have never stopped expanding and redesigning our hardware infrastructure and software tools.
We would like to thank all of you for making this possible by supporting us with your delegations.
We are ready and excited for the road ahead.
2020.12.11 - Epoch 234 summary
We are upgrading all our nodes to version 1.24.2 in preparation for the Allegra hard fork that will occur on December 16th.
Delegators don't have to take any action for the upcoming hard fork.
2020.12.06 - Epoch 233 summary
All our nodes are running on cardano-node v1.23.0.
2020.12.01 - Epoch 232 summary
A warm welcome to all our new delegators.
A new graph was added at the top of the page to show the amount of the pool stake and the saturation point.
This epoch we have worked mostly on our servers. We are upgrading the GHC compiler to version 8.10.2 on all our servers to be ready for the next cardano-node releases. Two of our relays have been upgraded to cardano-node v1.23.0 and we are now comparing their metrics with our other nodes running v1.21.1. At the end of epoch 233, we will proceed to upgrade the remaining relay nodes and the block producer node to the latest version.
2020.11.26 - Epoch 231 summary
The faulty SSD has been replaced. All our servers are up and running.
2020.11.21 - Epoch 230 summary
We added a form at the top of the page to receive feedback and suggestions. It is an additional way to contact us without having to share your personal information. Our goal is to add more functionality to the website in the following weeks and do a general redesign of the page by March 2021.
For the second epoch in a row, the output of the leaderLogs script we use to predict the number of expected blocks was not accurate. The script predicted one block more than was actually assigned to LEAF pool. We want to be transparent and share as much information as we can with our delegators, but we also don't want to share inaccurate data. We will stop providing an estimate of the number of blocks in the following epochs until those predictions become more accurate.
On Saturday, Nov. 21 we had a hardware failure in one of our servers. The operations of LEAF pool were not impacted since the failure happened in one of our secondary relays. As of today, we manage 9 relay nodes: 5 are public and 4 are private (not listed in the pool certificate) for security reasons. We designed our hardware architecture to be resilient to this type of event, and it paid off. The relay node that went offline was restarted on a backup server while we were in contact with the support team of the hosting company. It turned out to be an SSD failure and we are now waiting for the disk to be replaced. The SLA (Service Level Agreement) of the hosting company states that downtime due to hardware replacement should last less than 6 hours from the moment the problem is confirmed. We will evaluate the option of migrating that server elsewhere based on their reaction time.
2020.11.16 - Epoch 229 summary
This epoch LEAF pool minted 14 of the 15 blocks expected according to the leaderLog scripts. Pooltool is not reporting any orphan (non-propagated) block for LEAF pool, so we believe the number of expected blocks was not correct, as already happened once when LEAF pool minted one block more than the script predicted.
We have now pulled the latest version of the leaderLog scripts from GitHub, hoping it will give us more precise block schedules.
LEAF pool is scheduled for 27 leader slots next epoch (120% of expected blocks).
2020.11.11 - Epoch 228 summary
This epoch LEAF pool minted 23/23 blocks and is scheduled for 15 leader slots next epoch (65% of expected blocks).
2020.11.06 - Epoch 227 summary
Welcome to all new delegators. This epoch our stake increased by 3M ada.
Yesterday IOG announced that on December 6th the k parameter will be raised to 500.
The k parameter defines the desired number of stake pools in the system at equilibrium.
It works by setting a threshold on the pool stake above which the delegators' rewards start decreasing.
The expected effect of raising the k parameter is the migration of delegators from big, saturated pools to smaller pools.
The new k parameter will lower the saturation point of a pool to 64M ada (LEAF pool is currently at 37M ada with room to grow).
Delegators who desire to keep staking with LEAF pool don't have to take any action.
Here is the official blog post for more information:
This epoch LEAF pool minted 20/20 blocks and is scheduled for 23 leader slots next epoch (105% of expected blocks).
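The relationship between k and the saturation point can be sketched in a few lines. The circulating-supply figure below is our assumption, used only to reproduce the ~64M ada number mentioned above; the actual value moves over time:

```python
# Sketch of how the saturation point follows from k.
# The circulating supply is an assumed, approximate figure.
CIRCULATING_SUPPLY = 32_000_000_000  # ada (approximate)

def saturation_point(k: int, supply: int = CIRCULATING_SUPPLY) -> float:
    """Stake above this threshold stops increasing delegator rewards."""
    return supply / k

print(saturation_point(150))  # before the change: ~213M ada
print(saturation_point(500))  # after the change: 64M ada
```

With these figures, raising k from 150 to 500 cuts the saturation point roughly by a factor of 3.3, which is why large pools are expected to shed delegators to smaller ones.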
2020.11.01 - Epoch 226 summary
At the beginning of every epoch, stake pools are randomly selected for the chance of minting blocks.
The probability for a stake pool to be selected as "slot leader" is directly proportional to its stake.
That said, it is still a random process and the number of blocks assigned to a stake pool can vary considerably epoch over epoch.
During Epoch 226 LEAF was elected leader for 13 slots only.
We have no control over how many slots get assigned to our pool but - thanks to the hard work of some community members - we can get and share that information at the beginning of every epoch.
Credits for the scripts that calculate the slots led by a stake pool go to Andrew Westberg [BCSH], Papacarp [LOVE] and all other community members that contributed to this effort with variations of the scripts.
In Epoch 227, LEAF pool is scheduled for 20 leader slots (95% of the expected slots).
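The expectation described above can be sketched in Python. The stake figures below are hypothetical, chosen only to illustrate the scale of variance; the exact per-slot election probability in Ouroboros Praos is 1 - (1 - f)^sigma, which for small pools is well approximated by sigma * f:

```python
SLOTS_PER_EPOCH = 432_000   # one slot per second for 5 days
ACTIVE_SLOT_COEFF = 0.05    # fraction of slots that produce a block (f)

def expected_leader_slots(pool_stake: float, total_stake: float) -> float:
    """Approximate expected leader slots per epoch for a pool.

    The exact per-slot probability is 1 - (1 - f) ** sigma; for small
    stake fractions, sigma * f is a very close approximation."""
    sigma = pool_stake / total_stake  # pool's fraction of total stake
    return SLOTS_PER_EPOCH * ACTIVE_SLOT_COEFF * sigma

# Hypothetical example: a pool holding 21M of 21.6B total staked ada
# expects about 21 leader slots per epoch, but the actual draw is
# random and varies considerably epoch over epoch.
print(expected_leader_slots(21_000_000, 21_600_000_000))
```

Because each slot is an independent draw, an epoch with 13 slots and one with 27 slots are both entirely consistent with the same expected value.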
2020.10.11 - Hardware upgrade
The server running our producer node has been upgraded. We opted for a CPU with a faster base frequency (3.3GHz -> 3.6GHz) and a larger cache (12MB -> 32MB), and we doubled the amount of RAM (32GB -> 64GB). The new server is also geographically closer to our first line of relay servers.
All our relays are now running the latest cardano node version (1.21.1).
2020.09.27 - Update
We have been running a subset of our relays on Cardano Node version 1.20.0 for the past few days. The latest version of the Cardano Node seems stable, we haven't observed any CPU spike or excessive use of memory.
The code running on our Block Producer node has now been updated to version 1.20.0 as well. We also took the opportunity to rotate our KES keys during our node upgrade process.
All our servers are healthy, up and running.
2020.08.08 - Update
The Shelley hard fork was successful.
Cardano is transitioning from a phase in which all blocks are made by the IOHK servers running the OBFT protocol to one in which all blocks are made by stake pools running the Praos protocol.
This transition will happen in small steps: every 5 days the percentage of blocks made by stake pools will be increased by a small factor.
The proportion of blocks made by IOHK vs stake pool operators is controlled by the 'd' parameter. Currently we are at d=1, which means the Cardano blockchain is run entirely by the IOHK servers; we will eventually reach d=0, when the IOHK servers shut down and all the work is done by stake pool operators.
The initial plan was to start decrementing the 'd' parameter on August 8. This meant that the first blocks would be made by stake pools during epoch 210 (Aug 9-13), the first rewards would be calculated during the following epoch (Aug 14-18) and made available to delegators on August 18th at the end of epoch 211 (21:44:51 UTC).
Unfortunately, Daedalus 2.0.0 had an issue that caused one third of the stake pools not to show up in the delegation center.
This issue has since been fixed with the release of Daedalus 2.0.1, but it caused many pools to be unable to collect stake in time.
IOHK decided to postpone the decrement of the 'd' parameter by 1 epoch to be fairer to the stake pool operators initially left out of the game.
That means stake pools will be able to mint blocks starting 5 days after the planned date, and the rewards to delegators will be delayed by the same period (from Aug 18 to Aug 23).
On Friday, August 14th, IOHK will release more information on the decay rate of the 'd' parameter in an official blog post.
We will then know when the system will be fully decentralized and the stake pools will be running at full steam.
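The effect of the 'd' parameter on block production can be summarized numerically. The linear split by d matches the behavior described above, while the per-epoch block count (~21,600 at f = 0.05) is our assumption:

```python
BLOCKS_PER_EPOCH = 21_600  # ~432,000 slots per epoch at f = 0.05 (assumed)

def expected_spo_blocks(d: float) -> float:
    """Expected blocks minted by stake pools per epoch at a given d.

    d = 1.0: all blocks come from IOHK's OBFT nodes;
    d = 0.0: full decentralization, all blocks from stake pools."""
    if not 0.0 <= d <= 1.0:
        raise ValueError("d must be between 0 and 1")
    return (1.0 - d) * BLOCKS_PER_EPOCH

for d in (1.0, 0.9, 0.5, 0.0):
    print(f"d = {d}: ~{expected_spo_blocks(d):.0f} pool blocks per epoch")
```

Each small decrement of d therefore shifts a proportional slice of block production from the IOHK servers to the stake pool operators.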
Daedalus 2.1.0 is now available and introduces the functionality of redeeming ITN rewards.
2020.07.29 - We are live
LEAF pool has been registered on the blockchain and it is visible in Daedalus 2.0.0. It is now possible to delegate to our pool.
In the following days we will increase the initial pool pledge as soon as the ITN rewards are redeemed.
2020.07.29 - Update
We are ready for the Shelley hard fork that will take place today at 21:44:51 UTC.
LEAF pool will be registered on the blockchain right after the hard fork and will be visible in Daedalus a few hours later.
We really appreciate pooltool.io recognizing all the work we put into the Incentivized Testnet by awarding us the following badges:
2020.07.26 - Update
LEAF pool has been registered on "mainnet_candidate_4", most likely the last of the testnets. We are waiting for the release of a new Daedalus version to fund the pool and increase its pledge.
All the data collected during the incentivized testnet period will be preserved and shown below.
2020.06.01 - Getting ready for the Shelley era
We are reorganizing our hardware infrastructure in preparation for the Haskell testnet and in view of the upcoming Shelley era.
During the Haskell testnet we will test our new software/hardware configuration consisting of 6 nodes (1 master node + 5 relay nodes) running on 4 different data centers.
Our mission is to contribute to the Cardano network's decentralization by running a reliable and independent node. In order to meet this goal we don't want to concentrate all our servers in the hands of a single service provider nor to have all of them located in a single facility.
We will not rely on any AWS-managed server. We will instead use three other service providers to manage our servers. The servers will be geographically distributed across 4 countries (Netherlands, Germany, UK, USA). This will help us avoid a single point of failure by implementing basic concepts of redundancy.
Operations of LEAF stake pool on the Incentivized Testnet will not be impacted by the activities in preparation for the Shelley era.
Update - 03 Mar 2020
Recently, the performance of LEAF Pool touched a few low points.
The pool configuration has remained substantially unchanged during the last 10 epochs, aside from some marginal tweaks. It looks like the network is no longer as stable as it used to be, and this is affecting the performance of our pool.
In Epoch 79 we upgraded to the latest official version of jormungandr (v0.8.13), but it performed poorly. Halfway into Epoch 80 we had to revert back to version 0.8.9.
LEAF Pool relies on two servers: "S1" is located in Frankfurt and has a hexa-core Intel Xeon E-2136 processor clocked at 4.5GHz (6 cores / 12 threads); "S2" is located in Amsterdam and has two quad-core Intel Xeon E5620 processors clocked at 2.40GHz (8 cores / 16 threads). Both servers have a full-duplex 1Gbit/s internet connection. One server runs the pool while the other runs a passive node and is used to test new configurations and as a backup should the first server go down.
We have now migrated LEAF pool from S1 to S2 and we will continue monitoring its performance.
Incident Report - 14 Feb 2020
At 01:40am UTC the Jormungandr process running on our node stopped functioning properly.
Unfortunately, our watchdog process, designed to monitor and restart the node in case of this type of event, was not up and running. The watchdog process had been stopped for maintenance activities and never restarted.
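For readers curious about what such a watchdog looks like, here is a minimal sketch. The actual script we ran is not published; the launch command below is a placeholder, and a real watchdog should also check node health (e.g. via the REST API), not just that the process is alive:

```python
import subprocess
import time

def ensure_running(process, cmd):
    """Return the process if it is still alive, otherwise (re)start it."""
    if process is None or process.poll() is not None:
        return subprocess.Popen(cmd)
    return process

def watchdog(cmd, poll_seconds=30):
    """Poll the node process and restart it whenever it exits."""
    process = None
    while True:
        process = ensure_running(process, cmd)
        time.sleep(poll_seconds)

# Placeholder invocation; the real command depends on the node setup.
# watchdog(["jormungandr", "--config", "node-config.yaml"])
```

The lesson from this incident applies regardless of implementation: the watchdog itself must be supervised (e.g. by systemd or cron), or a maintenance window can leave the node unprotected exactly as happened here.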
This was the first major problem we observed since the upgrade to Jormungandr v0.8.9.
In the morning we opted to stop the node and restart it with the latest release of Jormungandr, v0.8.10. We observed the same problems we had already seen with v0.8.10-alpha1 and decided to revert back to v0.8.9.
The performance of our node took a big hit in the current epoch. We started Epoch 62 with the lowest slot count we had so far and we lost 12 out of 22 slots due to the node downtime.
The node is now running with the same configuration that gave us near 100% performance in the previous epochs and should recover normal operation starting with Epoch 63.
Update - 28 Jan 2020
LEAF pool has been running on jormungandr v0.8.7 for the last 5 days. Pool operators expressed mixed feelings about this particular release in the official Telegram channel. We found it to be as stable as the previous release, but we also noticed a different load distribution on the cores of our server.
A few threads of the Jormungandr process started consuming an unusual amount of CPU time. This is something we had never noticed before and are now monitoring. It could also be related to a few changes we made to the pool configuration; things are moving quite fast. We are updating the node on a weekly basis and tweaking configurations in between slot times.
In the meantime, interesting commits are piling up in the Jormungandr repo in preparation for the next release. With v0.8.8 we decided to change our upgrade policy: we will upgrade a passive node first, look for possible regressions, and then update the pool process 24h later.
Update - 21 Jan 2020
The latest release of Jormungandr (v0.8.6) improved the stability of the node, our uptime is now at 6 days and counting.
Unfortunately, it is still not perfect. Issue #1580 is causing us to miss a few blocks. The issue has been marked as 'high priority' and we are confident that the IOHK team will fix it soon. In the meantime, the node's performance will be slightly impacted. Considering that in turn we will have fewer leader slots assigned to us, there is a possibility that the ROI of our pool will decrease to 8-10% during the next few epochs.
Jormungandr is on a weekly release schedule. We will upgrade our node as soon as a new release is available.
Our first million Ada
We are very proud to announce that at the end of epoch 34 we reached the one million ada mark in rewards for our delegators.
Since upgrading to Jormungandr v0.8.6, our node has been much, much more stable. The node uptime is now at 50 hours and counting and it's serving ~2400 connections.
The way forward
We decided to test fewer server configurations for longer periods of time. The instability of the network introduced high variance in the pool performance, epoch over epoch, even for the same configuration.
We are currently collecting more data and trying to define better metrics to assess server performance, resource utilization, and bottlenecks before going back for more optimizations.
When we are happy with the pool performance and our set of tools to monitor and control the stake pool, it will be time to show some love to this website (newsletter, live statistics, alerts, etc...).
Our mission is to contribute to the Cardano network's decentralization by running a reliable and independent node.
And we want to get it right.
Pedal to the metal
After the hardware upgrade of December 21st, we tried several node and kernel configurations. In the end we were not pleased with our virtual server provider and decided to go for a dedicated server. On December 28th, at the beginning of Epoch 15, LEAF stake pool migrated to the new server.
Simply put, the new server is stupid fast and overspec'd for the task.
We also added a couple of charts at the top of the page. Return on investment (ROI) data comes straight from adapools.org, while the performance graph shows the number of blocks signed over the total number of blocks assigned to the stake pool.
With the old server, we had a failure rate of 50-60% (we should also consider that the network was really unstable in those epochs). The new server is making a difference, reducing the failure rate to 5-10%.
Our goal is, of course, 100% reliability. We will spend the next two weeks applying incremental changes to the server configuration. Improvements/regressions will be quantified over a two-epoch observation period. We will keep you posted.
The server maintenance activity scheduled at 17:20 UTC on December 21st was successful. In less than 10 minutes we took the server offline, doubled the number of vCPUs and RAM, and got back online.
The sense of community
Jormungandr v0.8.4 has been released. The node was updated right at the beginning of Epoch 6, but it didn't solve the fork issues. Practically every node has been missing assigned blocks.
The general feeling is that all stake pool operators are in this together and are trying their best; no discouragement. It's called a Testnet for a reason, after all.
Shout out to all those helping out fellow stakepool operators and welcoming newcomers on the official forum, telegram channels and their Reddit posts, tweets, blogs, YouTube videos and live streams.
The long night of Epoch 5
Epoch 5 is almost over. It was not easy.
The forking of the blockchain that we saw in the previous days got worse. We still managed to process ~90 blocks during this epoch, but we had to trade a few hours of sleep to make sure the node was not running on the wrong side of the chain.
Hopefully things will get better with future releases of Jormungandr.
We had a bumpy start
During Epochs 3 and 4 the Shelley Incentivized Testnet became partitioned. We had to restart our node on numerous occasions, encountering problems during the bootstrapping phase.
We do apologize for the prolonged downtime of our node. We are closely monitoring the state of the network to minimize the impact of similar events should they occur again.
In the next few weeks, we will work to establish our presence on social media and set up an email contact for support requests.
In the meantime, please buckle up and enjoy the ride :)