Reboot – THW Core Switch 23/12/2021 [resolved]

We are seeing some VLANs on our core network flap when they pass through a core switch in Telehouse West.

We may need to reset or reload this switch to clear this issue. This MAY cause some instability on the network for a short period, and services hosted in Telehouse West may drop for 15-20 minutes.

We will update this as investigations continue.

[Update 1 – 23:30]

We have reloaded one of our core Juniper switches and are monitoring for stability.

[Update 24/12 9:44]

We have monitored the network after the switch reload at 11pm last night, and it has stayed stable with no further flaps. While the switch had no error logs during the incident, it was randomly blocking traffic on some VLANs, including a backup VLAN between two core routers in Telehouse East and North. This was causing some packet loss / drops as the traffic switched between links. We apologise for the issues seen; however, while we are aware this issue started during the day, we needed to perform the reboot out of hours since it caused a 10-15 minute outage to directly connected services.

UPDATE: Outage on some broadband lines [13/10/2021]

14:46 UPDATE
The supplier advises that their issues should now be resolved; however, it may take some time for all circuits to reconnect due to the increased load of subscribers attempting to log in to their RADIUS servers. Our apologies again for this supplier outage.
13:20 UPDATE
Good afternoon.
The supplier had a power outage at one of their data centres, which has affected their network transit and backhaul, and therefore some lines. We are seeing some lines return, but not all, and will update here as we get more news.

————-
Good afternoon.
We are aware of an issue within one of the suppliers we use for these lines. At this point we don't have an announcement or full details. The issue is also affecting their portal, which we access, and their status page isn't working.
We are trying to get an update on this which we will pass on as soon as we can.
Note: this issue is not on the Merula network but within the supplier's network. Our apologies to those with affected circuits.

UPDATE: 14th April 20:45

Once again, please accept our apologies for the problems you’ve seen over the previous couple of days. We realise that this has caused you all serious issues and for that, we’re very sorry.

Various internal changes have been implemented over the last 48 hours and currently, we believe that the network and associated services are now stable and will remain that way. We continue to monitor the situation closely to ensure that our network remains stable and there’s no further impact to your services.

Please email us in the normal way if you have any questions or concerns. Thanks again for your support through this incident.

Ongoing DDoS attack against our network.

23:30 UPDATE

Again, our apologies. We continue to undertake remedial work to mitigate this ongoing attack.

We will update here as usual.

————

We are currently seeing a new, large-scale DDoS attack against our IP range. We are working to mitigate this but some services are being affected, with packet loss, routing failures or intermittent outages. Some email delivery will be queued until this is resolved.

We will update here as usual.

13:06 UPDATE

We are mitigating a large portion of this attack traffic, but currently the transit links remain saturated, which is causing the ongoing problems. We continue to work to resolve this as quickly as possible & apologise for the inconvenience caused.

13:44 UPDATE

We are seeing most services recovering. The attack target remains offline, but we believe that this incident is now contained. We apologise again for this interruption in service. If you are still seeing issues, please restart your equipment. Tickets can now be raised in the normal manner & the support line remains ready to assist.

17:43 UPDATE

The offsite server that hosts the NOC status site went down during the afternoon. Purely coincidental, but it meant we weren't able to access it to add more frequent updates. It's now back, and we'll update the status on the DDoS attack issues shortly. Our apologies that this wasn't available when it was needed the most.

10th April 2021 – internet issues [update]

We are currently seeing a large scale DDoS attack against our IP range.

This will lead to significant packet loss and access issues for our customers. Our NOC team are already at work to mitigate this. We will post a further update as soon as we have it.

[Update 11/04/21 – 10:00am]
We believe the issue cleared shortly after 7pm yesterday. We are still monitoring this closely; however, we do not believe there is currently any ongoing customer impact.

Loss of Resilience – Huntingdon – Resolved 27/06/2020

One of the two 10G links into our Huntingdon Data Centre, running from Huntingdon to Manchester, is currently down. This is primarily a backup link.

However, this does mean that both Huntingdon and Manchester currently have reduced resilience and are 'at risk'.

The fault has been raised with the carrier and they have already commenced work to restore service.

We will update this post as we know more.

[Update 23:30]

Update from our suppliers:

Our engineers have arrived on site and are performing site checks. We have scheduled splicers to be dispatched and are awaiting their ETA.

Next Update Due: 01:00 26/06

 

[Update 0:30]

Update from our suppliers:

Our engineers have found a fibre event 6km from the Leicester exchange. Engineers are making their way to the area, where there is a confirmed fibre break.

[Update 2:30]

Our field teams have located damage on Aylestone Road at the junction of Putney Rd West. Road works have cut and dug up multiple cables, causing significant damage to cables and joints. It will take longer than expected to assess the full damage before a fix plan can be drawn up.

[Update 5:20]
Damage Confirmed:
1 x 160f
4 x 24f
4 x 144f
2 x 8f
1 x 96f
2 out of 4 sub ducts damaged

Cause:
Road works. Cables were chewed up by a rotary cutting machine, causing severe damage.

Update:
The construction team have completed installation of 2 x new ducts.
The cable gang are on site.
Replacement cable has been ordered.

Plan:
Pull the replacement cable into the adjacent joints when it arrives and re-splice to restore service.

[Update 6:30]

The replacement cable will start arriving between 6:45 and 7:00.

Once it has arrived, the cable gang can start the process of pulling it in.

[Update 9:00]

Our suppliers are continuing to install the replacement fibres into the new ducts. Once this is completed, they will splice the new fibre.

[Update 16:00]

The civil work needed to install this fibre is complete and the replacement fibre is in the ground. Work to splice the new fibre is ongoing. Given there are over 600 fibre pairs affected, this will take several hours to complete.

We will continue to update this post as we have significant information.

[Update 11:00 27/6]

All replacement fibres are in place and splicing has been completed.

However, we are still seeing business impact and have sent ground teams to investigate a Transmode link that has remained down.
Once the investigating engineers return with more information, we will be in touch to advise of our next actions.

[Update 12:00 27/6]

We currently have a field engineer red-light testing the span of the Transmode link.
Once we have further information, we will update you again.

[Update 17:00 27/6]

The link has been up since 13:00 today. There has been some disturbance; however, it has been stable since 16:30 today. The suppliers have stated that all work has now been completed on this complex fault. We will continue to monitor the link, but we believe it is now stable and we have full resilience to our Huntingdon & Manchester locations.