Trace processing is currently delayed due to high load on an Elasticsearch node in our cluster. We are working on a fix.
Update 15:39: Measurement/monitoring data is now affected as well, with a lag of around 5 minutes.
Update 15:40: Our hosting provider confirmed that the hardware node underlying our Elasticsearch server is affected, and they are taking measures to restore it to normal. Load is decreasing slightly, so processing should pick up speed within a few minutes.
Update 15:55: Load is back to normal. Processing speed has increased, and all outstanding traces in the queue will be processed as quickly as possible.
Update 16:14: The trace backlog has been processed, and the data is up to date again with the most recently ingested traces.
We are sorry for the inconvenience of not having access to the most recent traces. To avoid this problem in the future, we are working with our hosting provider on a solution for when the hardware node becomes overloaded.