
PDU Failure – Single Rack – Avro Court [resolved]

In the early hours of this morning a single PDU failed, taking approximately five servers down in our data centre – including one of our two mail servers (which operate as a mirror) and a couple of other customer servers.

The PDU has been replaced and we are checking that all servers come back up as they should.

Note that no email has been lost.

We will update this post as we have confirmation that all servers are back up and operational.


[Update 10:30]

All affected services and servers are now back up and running after the PDU replacement – several VMs needed a manual boot and some services needed a manual start. It’s possible that some older email may take a little while to sync, but no email has been lost.

We are sorry for the issues here


Manchester Router Reboot – 16/11/2020 22:00

Memory usage on our core router in Manchester is higher than we would like. To avoid an issue in the future we are planning to reboot this router later this evening.

As no customers are directly connected to this router there should be no impact for most customers – however, you may see different routes to some locations while the router reboots.

We will update this post when the work is complete


Gigaclear Circuit Planned Works 10/11/2020 [completed]

We are moving our Gigaclear core link to a new data centre and a faster link. As part of this there will be a brief outage as the config is moved between the two ports.

We plan to complete this work approx 10pm tonight (10/11/2020). We will update this post as the work progresses or if there are any changes needed to this schedule



This work is now completed – all Gigaclear services are back up and running on the new faster link


Incident – Equinix Harbour Exchange Square [update 2]

We are aware of a current issue at the Harbour Exchange Square (HEX) Data Centre

Note that currently none of our equipment at the site is down – but we have seen a number of suppliers we connect to go down, and a number of other providers are reporting issues too.

We have a small number of leased lines down as a result – most automatically swapped over to the backup interconnect in Telehouse; there are two or three that we are looking to move manually.

At this point we have had a single update from Equinix to say a fire alarm was triggered at 4:30, which is when we saw connections drop – we’re waiting for a further update.

At this point, other than the small number of leased lines down (which should be back in the next 60 minutes), there are no other issues. However, as we don’t yet have any details of the cause, connections and network at the site should be assumed ‘at risk’.

We will update more as we have it


[Update 6:50]

Equinix have just sent the following update:

Equinix IBX Site Staff reports that fire alarm was triggered by the failure of output static switch from Galaxy UPS system supporting levels 1, 2, 3, 4 in building 8/9 at LD8. This has resulted in a loss of power for multiple customers and IBX Engineers are working to resolve the issue.
The next update will be provided in approximately 30 mins.

[Update 9:50]

We have seen some connections returning and we expect this to continue – the fire alarm has now been reinstated, which allows the data centre staff to work around the issue with the UPS at the site. We are still seeing some links down, but at this point in time this is not customer-affecting.

We will update further as we have it

UPS Maintenance – 15/07/2020

As part of our contract with our specialist contractors, the UPS in Avro Court is having its regular service on 15/07/2020.

This involves checking that the UPS is functioning correctly, battery health, etc.

This should not cause any interruption to service, but as work is being undertaken there is always a very small risk. Note that our UPS is now set up for n+1 resilience and only one unit will be worked on at any time, so we should retain full UPS availability throughout.

Work is expected to take place between 2pm and 5pm on the day. Our own engineers will be on site throughout to monitor the work

We will update this as the work completes



This work was completed with no issues

Planned Work – Broadband Interconnect Upgrades – 15/07/2020

As part of an upgrade from multiple 1G interconnects to multiple 10G interconnects, one of our wholesale suppliers will be swapping the optics on two of our links to them during the day on 15th July.

We will be doing our best to move sessions away from the link being worked on – however some customers may see one or two brief PPP drops as the optics are swapped and the link brought back as a 10G Link

Note this does not affect all broadband services. We apologise for this being during the day however we are limited by the supplier to the times available to perform this.

We anticipate the drops being between 10:30am and 2pm. However, as mentioned, all connections should automatically fail over to an alternative interconnect and re-connect within seconds. If your router does not automatically reconnect, please perform a full reboot; if that fails, contact support via the normal routes.
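If you want to check that your line has come back after one of these brief drops, a simple connectivity poll is enough. The sketch below is purely illustrative – the host 8.8.8.8:53 and the function names are example choices for this post, not part of any Merula tooling:

```python
import socket
import time

def connection_up(host="8.8.8.8", port=53, timeout=2):
    """Return True if a TCP connection to a known-good host succeeds.

    8.8.8.8:53 is just an example of a reliably reachable endpoint.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def wait_for_reconnect(check=connection_up, attempts=10, delay=5):
    """Poll until the line is back, up to `attempts` tries.

    Returns the number of failed polls before success, or None if the
    line never came back (time to reboot the router / contact support).
    """
    for failed in range(attempts):
        if check():
            return failed
        time.sleep(delay)
    return None
```

With the defaults this polls for up to about 50 seconds; a session that fails over correctly should reconnect well inside that window.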

We will update this post as the work progresses


[Update 10:30 15/7]

The supplier has delayed this work until approx 2:30 this afternoon – we will confirm the new time slot once we have it. We apologise for the change of time slot.


[Update 16:30]

This change completed before 4pm today; all interconnects are now upgraded to 10G.


Loss of Resilience – Huntingdon – Resolved 27/06/2020

One of the two 10G Links into our Huntingdon Data Centre – running from Huntingdon to Manchester is currently down. This is primarily a backup link.

However this does mean that both Huntingdon and Manchester have reduced resilience currently and are ‘at-risk’

The fault has been raised to the carrier and they have already commenced work to restore service

We will update this post as we know more

[Update 23:30]

Update from our suppliers:

Our engineers have arrived on site and are performing site checks. We have scheduled splicers to be dispatched and are awaiting their ETA.

Next Update Due: 01:00 26/06


[Update 0:30]

Update from our suppliers

Our engineers have found a fibre event 6km from the Leicester exchange. Engineers are making their way to the area, where there is a confirmed fibre break.

[Update 2:30]

Our field teams have located damage on Aylestone road at the junction of Putney Rd West. Road works have cut and dug up multiple cables causing significant damage to cables and joints. It will take longer than expected to assess the full damage before a fix plan can be drawn out.

[Update 5:20]
Damage confirmed:
1 x 160f
4 x 24f
4 x 144f
2 x 8f
1 x 96f
2 out of 4 sub-ducts damaged

Road works: the cables were chewed up by a rotary cutting machine, causing severe damage.

Construction have completed installation of 2 new ducts.
The cable gang are on site.
Replacement cable has been ordered.

When the replacement cable arrives it will be pulled in to the adjacent joints and re-spliced to restore service.

[Update 6:30]

Replacement cable will start arriving between 6:45 and 7:00.

Once it has arrived the cable gang can start the process of pulling it in.

[Update 9:00]

Our suppliers are continuing to replace the damaged fibres into the new ducts. Once this is completed they will splice the new fibre

[Update 16:00]

The Civil work needed to install this fibre is complete and the replacement fibre is in the ground. Work to splice the new fibre is ongoing. Given there are over 600 fibre pairs affected this will take several hours to complete.

We will continue to update as we have significant information

[Update 11:00 27/6]

All replacement fibres are in place and splicing has completed

However we are still seeing business impact and have sent ground teams to investigate a Transmode link that has remained down.
Once the investigating engineers return with more information we will be in touch to advise of our next actions.

[Update 12:00 27/6]

We currently have a field engineer red-light testing the span of the Transmode link.
Once we have further information we will update you again.

[Update 17:00 27/6]

The link has been up since 13:00 today – there has been some disturbance, however it has been stable since 16:30. The suppliers have stated that all work has now been completed on this complex fault. We will monitor the link, but we believe it is now stable and we have full resilience to our Huntingdon & Manchester locations.


Just to let you know that Merula – along with many others in the industry – will be observing a minute’s silence at 11am today to pay tribute to all the frontline workers who have unfortunately lost their lives in the COVID-19 pandemic.

We appreciate your support.

Network Instability [resolved]

We are aware of some instability on our core London network – this has caused a number of short periods of packet loss this afternoon. Our NOC team are investigating this currently. We will update this as we know more.

This is currently being worked on as a high priority.

We apologise for any issues this may cause

[update 19:10]

Investigations into the cause of this issue are ongoing. We can see BGP sessions dropping between our core routers but do not yet fully understand why. Many users may not notice the issue, however there will be occasional packet loss and ‘strange’ routes while this is being worked on.

[Update 20:30]

While this issue has improved, it is still ongoing. We are seeing BGP between two of our core routers (Telehouse North and East) flapping, which is causing some packet loss / routing drops. Now that we have isolated the reason for the drops, we are working to find out why this is ongoing. Initial thoughts were that the routers had stale data, so we rebooted them – however, while the issue has improved, it has not gone away. We will update this further shortly.
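For readers curious what “flapping” means in practice: a session is usually flagged when it drops repeatedly within a short window. A minimal, illustrative Python sketch (the function name, the 10-minute window and the threshold of 3 drops are assumptions for this example, not our actual monitoring):

```python
from datetime import datetime, timedelta

def is_flapping(events, window=timedelta(minutes=10), threshold=3):
    """Decide whether a BGP session is flapping.

    `events` is a list of (timestamp, state) tuples, with state "up" or
    "down". The session is flagged as flapping if it goes down
    `threshold` or more times within any sliding `window`.
    """
    downs = sorted(t for t, state in events if state == "down")
    for i in range(len(downs)):
        j = i
        # Count the drops that fall inside the window starting at downs[i].
        while j < len(downs) and downs[j] - downs[i] <= window:
            j += 1
        if j - i >= threshold:
            return True
    return False
```

A session that drops once and stays up is not flapping; three drops inside ten minutes would be.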

[Update 23:00]

The network has now been stable for over an hour and we believe the issue is cleared. We will, however, monitor this overnight before we close it. It appears this was down to a forwarding issue on a core Juniper switch in Telehouse West. After ruling out other (more likely) causes, this switch was reloaded at approx 22:00 and so far the routing issues and flaps have calmed.

As the switches showed no obvious errors or log issues we will not close this until we are certain – and will investigate in more detail. However, we hope there are no further customer issues as a result. We are sorry if this issue affected your service today.

[Update 26/4/2020 10:00]

The network has been stable overnight following the reboot of the core Juniper Switch. We are continuing to monitor – however we do believe this issue is now resolved.

We are sorry for any issues caused by this problem

Webmail URL Change – email addresses

As part of the certificate renewal on our webmail server – the URL for webmail has changed from to

The physical server email accounts etc remain the same – it is just the URL for webmail that has changed. If you prefer to access your email from an email client – you can also change the SMTP/IMAP/POP3 servers to the same name –

If you do not have an email account you are not affected by this change.

As ever if you have any questions please contact support via the normal routes



The main support number is 0845 330 0666 (geographical 01480 355566)

There’s a second, fallback (geographical) number: 01480 411616. All numbers ring directly at our support centre, staffed 24 hours a day, 365 days a year.

We'd also suggest that all customers subscribe to our mailing list (link above); status messages and updates will be delivered by email.