Nov 16, 2020 | Planned Work
We are seeing memory usage on our core router in Manchester running higher than we would like. To avoid an issue in the future, we are planning to reboot this router later this evening.
As there are no customers directly connected to this router, there should be no impact for most customers. However, you may see different routes to some locations while the router reboots.
We will update this post when the work is complete
Nov 10, 2020 | Planned Work
We are moving our Gigaclear core link to a new data centre and onto a faster link. As part of this there will be a brief outage as the config is moved between the two ports.
We plan to complete this work at approximately 10pm tonight (10/11/2020). We will update this post as the work progresses or if any changes to this schedule are needed.
[Update]
This work is now complete – all Gigaclear services are back up and running on the new, faster link.
Aug 18, 2020 | Uncategorized
We are aware of a current issue at the Harbour Exchange Square (HEX) Data Centre
Note that none of our equipment at the site is currently down, but we have seen a number of suppliers we connect to go down, and a number of other providers are reporting issues too.
We have a small number of leased lines down as a result. Most automatically swapped over to the backup interconnect in Telehouse; there are 2-3 that we are looking to move over manually.
At this point we have had a single update from Equinix to say a fire alarm was triggered at 4:30, which is when we saw connections drop. We are waiting for a further update.
Other than the small number of leased lines down (which should be back within the next 60 minutes), there are no other issues at this point. However, as we don't have any details of the cause of the issue, connections and the network at the site should be considered 'at risk'.
We will post further updates as we have them.
[Update 6:50]
Equinix have just sent the following update:
Equinix IBX Site Staff reports that fire alarm was triggered by the failure of output static switch from Galaxy UPS system supporting levels 1, 2, 3, 4 in building 8/9 at LD8. This has resulted in a loss of power for multiple customers and IBX Engineers are working to resolve the issue.
The next update will be provided in approximately 30 mins.
[Update 9:50]
We have seen some connections returning and we expect this to continue. The fire alarm has now been reinstated, which allows the data centre staff to work around the issue with the UPS at the site. We are still seeing some links down, but at this point in time this is not customer affecting.
We will update further as we have it
Jul 14, 2020 | Planned Work
As part of our contract with our specialist contractors, the UPS in Avro Court is having its regular service on 15/07/2020.
This involves checking that the UPS is functioning correctly, battery health, etc.
This should not cause any interruption to service, but as work is being undertaken there is always a very small risk. Note that our UPS is now set up for N+1 resilience – one more unit than is needed to carry the load – and only one unit will be worked on at any time, so we should retain full UPS availability throughout.
Work is expected to take place between 2pm and 5pm on the day. Our own engineers will be on site throughout to monitor the work
We will update this as the work completes
[Update]
This work was completed with no issues
Jul 14, 2020 | Planned Work
As part of an upgrade from multiple 1G interconnects to multiple 10G interconnects, one of our wholesale suppliers will be swapping the optics on two of our links to them during the day on 15th July.
We will be doing our best to move sessions away from the link being worked on; however, some customers may see one or two brief PPP drops as the optics are swapped and the link is brought back up as a 10G link.
Note that this does not affect all broadband services. We apologise for this work taking place during the day; however, we are limited to the time slots the supplier makes available.
We anticipate the drops occurring between 10:30am and 2pm. However, as mentioned, all connections should automatically fail over to an alternative interconnect and reconnect within seconds. If your router does not reconnect automatically, please perform a full reboot, and if that fails, contact support via the normal routes.
We will update this post as the work progresses
[Update 10:30 15/7]
The supplier has delayed this work until approximately 2:30 this afternoon. We will post the new time slot once it is confirmed. We apologise for the change to the schedule.
[Update 16:30]
This change completed before 4pm today; all interconnects are now upgraded to 10G.
Jun 25, 2020 | Information
One of the two 10G links into our Huntingdon Data Centre – the link running from Huntingdon to Manchester – is currently down. This is primarily a backup link.
However, this does mean that both Huntingdon and Manchester currently have reduced resilience and are 'at risk'.
The fault has been raised to the carrier and they have already commenced work to restore service
We will update this post as we know more
[Update 23:30]
Update from our suppliers:
Our engineers have arrived on site and are performing site checks. We have scheduled splicers to be dispatched and are awaiting their ETA.
Next Update Due: 01:00 26/06
[Update 0:30]
Update from our suppliers:
Our engineers have found a fibre event 6km from the Leicester exchange. Engineers are making their way to the area, where there is a confirmed fibre break.
[Update 2:30]
Our field teams have located damage on Aylestone Road at the junction of Putney Road West. Road works have cut and dug up multiple cables, causing significant damage to cables and joints. It will take longer than expected to assess the full damage before a fix plan can be drawn up.
[Update 5:20]
Damage Confirmed:
1 x 160f
4 x 24f
4 x 144f
2 x 8f
1 x 96f
2 out of 4 sub ducts damaged
Cause:
Road works. Cables chewed up by a rotary cutting machine. Severe damage has been caused to cables.
Update:
Construction teams have completed installation of 2 new ducts.
The cable gang is on site.
Replacement cable has been ordered.
Plan:
Pull the replacement cables into the adjacent joints when they arrive and re-splice to restore service.
[Update 6:30]
The replacement cable will start arriving between 6:45 and 7:00.
Once it has arrived the cable gang can start the process of pulling it in.
[Update 9:00]
Our suppliers are continuing to pull the replacement fibres into the new ducts. Once this is completed they will splice the new fibre.
[Update 16:00]
The civil work needed to install this fibre is complete and the replacement fibre is in the ground. Work to splice the new fibre is ongoing. Given that there are over 600 fibre pairs affected, this will take several hours to complete.
We will continue to update as we have significant information
[Update 11:00 27/6]
All replacement fibres are in place and splicing has completed
However we are still seeing business impact and have sent ground teams to investigate a Transmode link that has remained down.
Once the investigating engineers return with more information we will be in touch to advise of our next actions.
[Update 12:00 27/6]
We currently have a field engineer red-light testing the span of the Transmode link.
Once we have further information we will update you again.
[Update 17:00 27/6]
The link has been up since 13:00 today. There was some disturbance, but it has been stable since 16:30 today. The suppliers have stated that all work on this complex fault is now complete. We will continue to monitor the link, but we believe it is now stable and we have full resilience to our Huntingdon & Manchester locations.
Apr 28, 2020 | Information
Just to let you know that Merula, along with many others in the industry, will be observing a minute's silence at 11am today to pay tribute to all the frontline workers who have sadly lost their lives in the COVID-19 pandemic.
We appreciate your support.
Apr 25, 2020 | Outages
We are aware of some instability on our core London network – this has caused a number of short periods of packet loss this afternoon. Our NOC team are investigating this currently. We will update this as we know more.
This is currently being worked on as a high priority.
We apologise for any issues this may cause
[Update 19:10]
Investigations into the cause of this issue are ongoing, and our engineers continue to work to understand and resolve the root cause. We can see BGP sessions dropping between our core routers, but do not yet fully understand why. Many users may not notice the issue; however, there will be occasional packet loss and 'strange' routes while this is being worked on.
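If you want a rough way of checking whether your own connection is seeing this intermittent loss, the sketch below (in Python, and not an official Merula tool – the target address and probe count are just example values) runs a batch of pings and reports the loss percentage:

```python
# Rough packet-loss probe: run a batch of pings and report the loss percentage.
# Assumes a Unix-like "ping" binary; the target and count are example values only.
import re
import subprocess

TARGET = "8.8.8.8"   # example destination; choose somewhere relevant to you
COUNT = 50           # number of probes per run

result = subprocess.run(
    ["ping", "-c", str(COUNT), TARGET],
    capture_output=True,
    text=True,
)

# ping's summary line looks like "... 4% packet loss ..."
match = re.search(r"(\d+(?:\.\d+)?)% packet loss", result.stdout)
if match:
    print(f"{TARGET}: {match.group(1)}% loss over {COUNT} probes")
else:
    print("Could not parse ping output:")
    print(result.stdout or result.stderr)
```

Repeating this every few minutes and noting any non-zero loss should give a reasonable picture of whether the instability is affecting your connection.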
[Update 20:30]
While this issue has improved, it is still ongoing. We are seeing BGP sessions between two of our core routers (Telehouse North and East) flapping, and this is causing some packet loss / routing drops. Now that we have isolated the reason for the drops, we are working to find out why this is ongoing. Our initial thought was that the routers had stale data and we therefore rebooted them; however, while improved, the issue has not gone away. We will update this further shortly.
[Update 23:00]
The network has now been stable for over an hour and we believe the issue is cleared. We will, however, monitor this overnight before we close this. It appears that this was down to a forwarding issue on a core Juniper switch in Telehouse West. After ruling out other (more likely) causes, this switch was reloaded at approximately 22:00, and so far the routing issues and flaps have calmed.
As the switches showed no obvious errors or log issues, we will not close this until we are certain, and we will investigate in more detail. However, we hope there are no further customer issues as a result of this. We are sorry if this issue affected your service today.
[Update 26/4/2020 10:00]
The network has been stable overnight following the reboot of the core Juniper Switch. We are continuing to monitor – however we do believe this issue is now resolved.
We are sorry for any issues caused by this problem
Apr 21, 2020 | Information
As part of the certificate renewal on our webmail server, the URL for webmail has changed from https://mx1.merula.co.uk to https://mymail.merula.net
The physical server, email accounts, etc. remain the same; it is just the URL for webmail that has changed. If you prefer to access your email from an email client, you can also change the SMTP/IMAP/POP3 server setting to the same name: mymail.merula.net
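If you would like to confirm that the new server name is reachable from your connection before changing your mail client, the short Python sketch below (not an official Merula tool – the ports shown are the standard IMAPS and SMTP submission ports, and your account's settings may differ) opens secure connections to it and prints the server greetings:

```python
# Quick connectivity check against the new mail server name.
# The ports are the common defaults (993 for IMAP over SSL, 587 for SMTP
# with STARTTLS); confirm with support if your account uses different ones.
import imaplib
import smtplib
import ssl

HOST = "mymail.merula.net"
context = ssl.create_default_context()

# IMAP over SSL: the TLS handshake (and certificate check) happens on connect.
with imaplib.IMAP4_SSL(HOST, 993, ssl_context=context) as imap:
    print("IMAP greeting:", imap.welcome)

# SMTP submission: upgrade to TLS with STARTTLS before doing anything else.
with smtplib.SMTP(HOST, 587, timeout=10) as smtp:
    smtp.starttls(context=context)
    print("SMTP responded:", smtp.noop())
```

If both connections succeed without certificate errors, the new name is working and it should be safe to update the server setting in your client.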
If you do not have a merula.co.uk email account you are not affected by this change
As ever if you have any questions please contact support via the normal routes
Apr 16, 2020 | Unplanned downtime
There appears to have been a loss of power and/or a switch failure in a single rack in our Huntingdon Data Centre. This dropped at approximately 2am this morning. The rack houses a small number of Merula and customer servers.
We are aware of this and will investigate and resolve it as soon as possible. We plan to be on site at approximately 7am and will resolve the issue then.
We apologise for any issues this may cause and will update this as soon as we have more details
[Update 8:15am]
The issue appears to be related to the switch in the rack. After being offline for approximately 90 minutes the switch came back up and connectivity was restored to most servers in the rack. However, we are still seeing connectivity issues with a couple of servers in this rack. Given that the servers themselves look OK and have not rebooted, we have taken the decision to manually reboot the switch to see if this restores service. This will unfortunately result in a loss of connectivity to all services in this rack for a couple of minutes. We will update this as we know more.
[Update 8:53]
The switch was rebooted and the latest saved config has been re-applied. We believe this has restored service to the services we were aware of having an issue. We are continuing to check for anything else affected and are investigating the cause of the switch outage further. We may have to schedule a swap-out of the switch if we cannot locate an obvious issue. However, we believe all services in Huntingdon should now be restored. Please do email support if you continue to see any issues.
[Update 9:20]
The affected switch appears to have failed again. We will now start swapping it out for a replacement switch. We will have an update within the next 45 minutes.
[Update 11:30AM]
The switch has been replaced and we believe all services have recovered. We are checking for any remaining issues. If you are seeing any issues, please do raise them with support@merula.net. We will update this further later in the day, or sooner if we locate any remaining issues.