lines. We are seeing some lines return, but not all, and will update here as we get more news.
We are aware of an issue with one of our Juniper Core Routers in Harbour Exchange Square.
The vast majority of services have routed around this and are not affected. A small number of directly connected customers may be seeing an issue.
Our engineers are working to restore service to this router. We will update this page as we know more.
A remote hands reboot of the core router has not restored service. Our engineers are therefore en route to the data centre to investigate and restore service. We expect them to be on site by approximately 20:15 this evening and will update as soon as they arrive.
Our engineer is approximately 20 minutes from the data centre; they have collected spare parts en route in case any hardware needs swapping out. Next update by 21:15.
Our engineer is on site and working on the core router. We can see that the file system is corrupt, which is why the router did not boot when power cycled. We are working to restore this as soon as possible. In parallel, we have moved some lines onto the on-site switch, which is working, and are moving other links across as well to restore their service. Next update within an hour.
The router has now booted after the disk corruption was cleared. The config is being copied back and applied, and we hope to have service restored very shortly. Next update by 23:30.
The router is now back and passing traffic. We have not yet enabled all peers, so some traffic may take a slightly different route than normal, but no customer services are now impacted. As part of the recovery, the software upgrade planned for this router has been applied, so the planned work for that upgrade is no longer needed.
During the checking process we detected the need for a reboot to ensure the router and its config are fully updated. The reboot is in progress and we expect it to complete in 10-20 minutes.
The reboot cleared the router alarm and all routing is now back up. Monitoring shows all links working as they should be. There are some further checks to complete; however, we do not expect any further issues. We apologise to any customers affected by the issues this evening.
Once again, please accept our apologies for the problems you’ve seen over the previous couple of days. We realise that this has caused you all serious issues and for that, we’re very sorry.
Various internal changes have been implemented over the last 48 hours and currently, we believe that the network and associated services are now stable and will remain that way. We continue to monitor the situation closely to ensure that our network remains stable and there’s no further impact to your services.
Please email us in the normal way if you have any questions or concerns. Thanks again for your support through this incident.
23:30 Again, our apologies
We continue to undertake remedial work to mitigate this ongoing attack.
We will update here as usual.
We are currently seeing a new, large-scale DDoS attack against our IP range. We are working to mitigate this but some services are being affected, with packet loss, routing failures or intermittent outages. Some email delivery will be queued until this is resolved.
We will update here as usual.
We are mitigating a large portion of this attack traffic, but the transit links currently remain saturated, which is causing the ongoing problems. We continue to work to resolve this as quickly as possible and apologise for the ongoing inconvenience caused.
We are seeing most services recovering. The attack target remains offline, but we believe that this incident is now contained. We apologise again for this interruption in service. If you are still seeing issues, please restart your equipment. Tickets can now be raised in the normal manner, and the support line remains ready to assist.
The offsite server that hosts the status NOC site went down during the afternoon. This was purely coincidental, but it meant we weren't able to access it to add more frequent updates. It is now back, and we will update the status on the DDoS attack issues shortly. Our apologies that this wasn't available when it was needed the most.
We are currently seeing a large scale DDoS attack against our IP range.
This will lead to significant packet loss and access issues for our customers. Our NOC team is already at work to mitigate this. We will post a further update as soon as we have it.
[Update 11/04/21 – 10:00am]
We believe the issue cleared shortly after 7pm yesterday. We are still monitoring this closely; however, we do not believe there is currently any ongoing customer impact.
Please be advised of the following restoration details: at 16:22, IOM Card 2 and MDA Card 1 were remotely reseated on T0090 Luton Metnet 2a, restoring all services.