Ongoing DDoS attack against our network.

23:30 Again, our apologies

We continue to undertake remedial work to mitigate this ongoing attack.

We will update here as usual.

————

We are currently seeing a new, large-scale DDoS attack against our IP range. We are working to mitigate this but some services are being affected, with packet loss, routing failures or intermittent outages. Some email delivery will be queued until this is resolved.
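If you want to quantify the impact from your side rather than eyeball it, a rough probe along these lines will report a loss rate. This is a minimal sketch only: the target hostname is a placeholder (substitute a host you normally reach through our network) and it assumes the Linux ping flags.

```python
#!/usr/bin/env python3
"""Rough packet-loss probe during the incident."""
import subprocess

TARGET = "host.example.net"  # placeholder: use a host you normally reach via our network
COUNT = 20

def packet_loss(target: str, count: int) -> float:
    """Send `count` single ICMP echoes and return the fraction unanswered."""
    replies = 0
    for _ in range(count):
        # -c 1: one echo request; -W 2: wait at most 2s for a reply (Linux flags)
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", target],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode == 0:
            replies += 1
    return 1.0 - replies / count

if __name__ == "__main__":
    loss = packet_loss(TARGET, COUNT)
    print(f"{TARGET}: {loss:.0%} packet loss over {COUNT} probes")
```

Intermittent results (loss that comes and goes between runs) are consistent with the mitigation work described here rather than a hard outage.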

We will update here as usual.

13:06 UPDATE

We are mitigating a large portion of the attack traffic, but the transit links remain saturated, which is causing the ongoing problems. We continue to work to resolve this as quickly as possible & apologise for the inconvenience caused.

13:44 UPDATE

We are seeing most services recovering. The attack target remains offline, but we believe this incident is now contained. We apologise again for the interruption in service. If you are still seeing issues, please restart your equipment. Tickets can now be raised in the normal manner & the support line remains ready to assist.

17:43 UPDATE

The offsite server that hosts the status NOC site went down during the afternoon. Purely coincidental, but it meant we weren’t able to access it to add more frequent updates. It’s now back and we’ll update the status on the DDoS attack issues shortly. Our apologies that this wasn’t available when it was needed the most.

10th April 2021 – internet issues [update]

We are currently seeing a large-scale DDoS attack against our IP range.

This will lead to significant packet loss and access issues for our customers. Our NOC team are already at work to mitigate this. We will post a further update as soon as we have it.

[Update 11/04/21 – 10:00am]
We believe the issue cleared shortly after 7pm yesterday. We are still monitoring this closely; however, we do not believe there is currently any ongoing customer impact.

ISSUE: Virgin Media circuits – RESOLVED

We have seen alerts that some circuits from Virgin Media have dropped. We believe this is a Virgin Media issue but are currently investigating.
 
We will update this status as soon as we know more and within the hour at the latest.
 
UPDATE: 16:09
Virgin have confirmed an issue at one of their hub sites affecting some parts of Hertfordshire. Senior engineers and the Core Incident team are working on this at the moment.
 
If service is not restored by then, we will post another update within approx 30 minutes.
 
UPDATE: 16:20
We are seeing the affected circuits back online. At this point we don't have an 'all clear', so they should be considered at risk, but we hope this is resolved.
 
RESOLVED 16:20
We have received the following update from Virgin Media:

"Please be advised that @ 16:22: Restoration Details: IOM Card 2 and MDA Card 1 remotely reseated on T0090 Luton Metnet 2a, restoring all services."
 
This appears to have been a card failure at one of the Virgin Media hub sites. We apologise for the issues that occurred here.

Rack Issue – Huntingdon 16/4/2020 [update]

There appears to have been a loss of power and/or a switch failure in a single rack in our Huntingdon Data Centre. This dropped at approx 2am this morning. This rack houses a small number of Merula and customer servers.

We are aware and will investigate and resolve this asap. We plan to be on site at approx 7am and will resolve the issue then.

We apologise for any issues this may cause and will update this as soon as we have more details.


[update 8:15am]

The issue appears to be related to the switch in the rack. After being offline for approx 90 minutes, the switch came back up and connectivity was restored to most servers in the rack. However, we are still seeing connectivity issues with a couple of servers in this rack. Given that the servers themselves look OK and have not rebooted, we have taken the decision to manually reboot the switch to see if this restores service. This will unfortunately result in a loss of connectivity to all services in this rack for a couple of minutes. We will update this as we know more.
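If you are watching your own servers in this rack through the reboot, a simple reachability poll such as the sketch below will show the moment connectivity returns. The host names and ports here are hypothetical examples, not our actual addressing.

```python
#!/usr/bin/env python3
"""Poll TCP services behind the rebooting switch until they answer."""
import socket
import time

# Hypothetical examples: substitute your own servers in the affected rack
HOSTS = [("server1.example.net", 22), ("server2.example.net", 443)]

def is_up(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    pending = set(HOSTS)
    while pending:
        for host, port in sorted(pending):
            if is_up(host, port):
                print(f"{host}:{port} reachable again")
                pending.discard((host, port))
        if pending:
            time.sleep(10)  # a switch reboot takes a couple of minutes
```

A TCP connect test is used rather than ping because it confirms the service itself is answering, not just that the address routes.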

[Update 8:53]

The switch was rebooted and the latest saved config has been re-applied. We believe this has restored service to the services we knew to be affected. We are continuing to check for anything else with an issue and are investigating the cause of the switch outage further. We may have to schedule a swap-out of the switch if we cannot locate an obvious issue here. However, we believe that all services in Huntingdon should now be restored. Please do email support if you continue to see any issues.


[Update 9:20]

The affected switch appears to have failed again. We will now start swapping it out for a replacement switch. We will have an update within the next 45 minutes.

[Update 11:30AM]

The switch has been replaced and we believe all services have recovered. We are checking for any remaining issues. If you are seeing any issues, please do raise them with support@merula.net. We will update this further later in the day, or sooner if we locate any remaining issues.

UPDATE: Network outage 3rd Feb 2020 [resolved]

4th Feb: the supplier network resolved all outstanding issues on this extended outage. Apologies again to anyone affected.

10.35AM UPDATE: this problem, which affects a large number of FTTx lines, is still being worked on by Openreach engineers. The latest estimate for the work to be completed is 18.30 tonight. We'll continue to update here as and when we hear any more news.

———

We are aware that one of our suppliers has a major service outage. The supplier's engineers have no ETA for a fix as yet. We will update as we hear more. This will affect ADSL & FTTC lines. Our apologies for this outage.


Lines affected:

Start: 03/02/2020 04:15
Raised: 03/02/2020 04:15
Detected: 03/02/2020 04:15
Next update / Cleared: 03/02/2020 09:30 (estimated)
Cleared Reason: N/A
Exchange Name: KINGS LANGLEY, LITTLE GADDESDEN, BOLSOVER, STAVELEY, HOLMEWOOD, OLD WHITTINGTON, CLAY CROSS, HOLYMOORSIDE, BASLOW, STONY STRATFORD, HANSLOPE, YARDLEY GOBION, CROXTON, CAXTON, LAISTERDYKE, DUDLEY HILL, UNDERCLIFFE, GRANTON, SOWERBY BRIDGE, ILLINGWORTH, CALDER VALLEY, WOMBWELL,
Incident Headline: Service affecting outage – Loss of Service
Incident Details: Our supplier engineers are working on the fault.
Area Codes: 01131 01132 01133 01138 01162 01173 01179 01212 01213 01214 01215 01216 01217 01223 01226 01246 01257 01274 01275 01311 01312 01313 01314 01315 01316 01317 01332 01422 01442 01446 01454 01480 01509 01773 01827 01908 01923 01926 01942 01954 02010 02032 02033 02081 02082 02083 02084 02085 02086 02087 02089 02476 02911 02920 02921 02922
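If you need to check whether a particular number falls inside the affected footprint, the prefixes above can be matched against the start of the dialled number. The sketch below is an illustration only: the set here is abbreviated (paste in the full list from the notice) and the five-digit prefix match is our reading of how the codes are formatted.

```python
#!/usr/bin/env python3
"""Check a phone number against the affected area-code prefixes."""

# Abbreviated here; use the complete list from the incident notice above
AFFECTED_CODES = {"01131", "01132", "01133", "01138", "01162", "02920"}

def is_affected(number: str) -> bool:
    """Normalise a UK number and compare its first five digits to the list."""
    digits = "".join(ch for ch in number if ch.isdigit())
    if digits.startswith("44"):  # +44 international form -> 0-leading national form
        digits = "0" + digits[2:]
    return digits[:5] in AFFECTED_CODES

if __name__ == "__main__":
    for n in ("0113 222 4444", "+44 29 2018 0000", "01865 555555"):
        print(n, "->", "affected" if is_affected(n) else "not listed")
```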


Switch Reboot – London Harbour Exchange Square

One of our core switches in Harbour Exchange Square had a routing issue at about 14:20 today. After our team checked the switch, it was decided the fastest resolution was to perform a routing engine switchover. We would normally plan this out of hours; however, as this was causing issues to customers, the decision was taken to perform it immediately.

This affected all services directly connected to, or routed through, this switch.

The switch reboot has now completed and service should be restored to all locations and services.

If there is still an ongoing issue for any services, please report it to support in the normal way.

We apologise for the issues this may have caused.