
Outages

Switch Reboot – London Harbour Exchange Square

One of our core switches in Harbour Exchange Square developed a routing issue at around 14:20 today. After our team checked the switch, it was decided that the fastest resolution was to perform a routing engine switchover. We would normally plan this work out of hours; however, as the issue was affecting customers, the decision was taken to perform the switchover immediately.

This affected all services directly connected to, or routed through, this switch.

The switch reboot has now completed and service should be restored to all locations and services.

If there is still an ongoing issue with any of your services, please report it to support in the normal way.

We apologise for any issues this may have caused.


OUTAGE: scheduled maintenance DSL overnight 10th April 2019

As part of our commitment to service improvement, we will be performing planned critical maintenance as part of capacity upgrades on our core network.

Maintenance window start: Wednesday 10th April 2019 23:00 GMT

Maintenance window end: Thursday 11th April 2019 06:00 GMT

Expected impact: The work will affect some of our DSL subscriber customers, who will see intermittent service during the maintenance window, including a short period of downtime. Service should be restored automatically, but a router reboot may be required. If service has not been restored after a reboot, please raise a support case in the normal manner.

OUTAGE: scheduled maintenance broadband 16th/17th Feb 2019

Over the course of the weekend we will be upgrading some of our core routers that terminate broadband sessions; each will be down for less than 30 seconds for an important firmware upgrade. The upgrades will be staggered over the two days. Most affected customers will reconnect automatically within seconds. If you have any issues, please reboot your router; if that fails for any reason, please call or email support in the normal way. This will only affect ADSL, FTTC & FTTP lines.

CLOSED: Portal diagnostics

UPDATE: the supplier code issues have been worked around and all tests are now available again.

—————————–

We are aware of issues with the suppliers' APIs that are affecting the ability to run diagnostics via the Merula portal (https://adslreports.merula.net); we are working on this now and will advise here as soon as it is resolved. Apologies for this loss of service.

OUTAGE: overnight works

RESOLVED: the fibre break was fixed at 8:53pm last night and all services are back to normal. This ticket is now closed.

UPDATE 15:50 — our supplier advises “in relation to the issue identified in Cambridge area regarding loss of service. We are still working hard to resolve these issues as a priority.

Due to heavy traffic in the area, this is impacting our ability to get into the pit location necessary to move services onto spare fibres. Traffic management is required to safely carry out the work and this cannot be implemented until 20:00 this evening due to local authority restrictions.

From 20:00 onwards we can commence repairs so we should see services begin to come back online overnight this evening.

Please accept our apologies at this time as we are treating this matter with the utmost urgency.”

UPDATE 14:45 — because of the amount of splicing work needed and the traffic management problems at this location, the ERT (estimated repair time) has been pushed out to 20:45 today.

UPDATE 13:00 — traffic management is only permitted until 15:30, but traffic lights and barriers are on site ready to go. We’ve not been advised of a new ERT as yet.

UPDATE 12:30 — work continues on site at the node. Traffic management has been requested. There is an ERT in 15 minutes' time, but the engineer is still working and waiting for a response from the traffic management team.

Next update due at 13:00.

UPDATE 10:31 — an engineer arrived on site at 10:08. The overnight change involved moving to a new cable, but there appears to be a fibre break. No ERT at present.

UPDATE 10:11 — this is now being treated as a high-priority fault, as completion has overrun, and it has been classified as an MSO (major service outage) due to the effect on Merula (and other) customers served through this node.

UPDATE: we are escalating this for a substantive update, as the works have again overrun their new completion time.

Emergency unscheduled works by one of our main backhaul suppliers have overrun, meaning that one of our 1Gb links from the data centre here in Huntingdon to London is hard down. Traffic is being rerouted via another of our backhaul links, but as this is slower you may see some slowness on some traffic until the work is completed.

We have been told that the work was due to finish at 6am but this has now been pushed out to 8am.  We will continue to update here. Our apologies if this affects your connection(s).

PLANNED WORK: 1/2/18 UPDATE

Further to yesterday's outage and the corrective work by the carrier, we will attempt to bring the failed link back into service this evening. If all goes to plan, there may be a brief network disruption while the network reconverges. If the link fails again, we will remove it from the network and continue working with our supplier.

The link was brought online at approximately 12:30am this morning; however, the previous issue recurred. This has been raised with the supplier, who believe it is now resolved correctly. We will plan a further maintenance window shortly to bring the link back into service. In the meantime there is no reduction in service, but there is a slight reduction in resilience to our Manchester PoP.

OUTAGE: Network drop 31/1/18

During planned work that a supplier had deemed low risk, a loop was injected into one of our links. This caused a significant level of packet loss in our network at approximately 22:50 on 31st January.
The link was removed from use and service was resumed, albeit with reduced resilience. The issue has been reported to the carrier, who have identified a potential problem and resolved it.

OUTAGES: connectivity & latency issues

RESOLVED: 13:14

We are not aware of any ongoing issues now and believe that the cause of this problem has been identified and remedial action taken. Once again, apologies to anyone affected this morning.

UPDATE:

We have removed one of the backhaul lines from our network as this appears to be causing routing issues; we are seeing the majority of the affected lines coming back to their normal latency and response times.

We will continue to update here and apologise again for this affecting you at the start of the working week.

We are aware of an as-yet-unidentified issue affecting a large number of our circuits, leading to slowdowns, poor-quality links and high latency. We are working on this now and will update here as soon as we have more information to share. We apologise that this is affecting you on a Monday morning.

UPDATE: core router issues in London & network outages

UPDATE2: initial indications from further analysis are that the DDoS targeted at the network core caused one of our core routers to cease routing packets correctly. However, the router did not shut down its BGP sessions, with the result that network traffic for a large number of destinations was black-holed for a period of around 30 minutes. Service was restored after the routing processes were restarted in sequence. We will continue to investigate and update here, but we believe that all services are now returning to normal. Some routers may have a stale session and will require a reboot to bring them back online.
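In practical terms, "black-holed" means the affected router was still advertising routes over BGP while silently dropping the traffic, so sessions looked healthy even though packets were going nowhere. For anyone who wants to watch for that kind of symptom from their own end, the sketch below is one rough way to do it: it simply attempts TCP connections to a handful of destinations and reports any that stop answering. This is an illustrative example only, not part of Merula's monitoring; the target hosts, ports and timings are placeholders you would need to adjust.

    #!/usr/bin/env python3
    # Rough reachability probe (illustrative only): flags destinations that stop
    # answering, the visible symptom of a black-holed route, even while BGP
    # sessions elsewhere may still look "up". Hosts, ports and timings are placeholders.

    import socket
    import time
    from typing import Optional

    TARGETS = [                  # substitute destinations that matter to you
        ("www.merula.net", 443),
        ("8.8.8.8", 53),
        ("1.1.1.1", 53),
    ]
    TIMEOUT_S = 3                # per-connection timeout in seconds
    INTERVAL_S = 60              # pause between sweeps in seconds


    def probe(host: str, port: int) -> Optional[float]:
        """Return TCP connect time in milliseconds, or None if the host is unreachable."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=TIMEOUT_S):
                return (time.monotonic() - start) * 1000.0
        except OSError:
            return None


    if __name__ == "__main__":
        while True:
            for host, port in TARGETS:
                rtt = probe(host, port)
                stamp = time.strftime("%H:%M:%S")
                if rtt is None:
                    print(f"{stamp}  {host}:{port}  UNREACHABLE (possible black-holed route)")
                else:
                    print(f"{stamp}  {host}:{port}  ok  {rtt:.1f} ms")
            time.sleep(INTERVAL_S)

If several unrelated destinations go unreachable at once while your line stays in sync, the fault is more likely to be upstream routing than your local connection, which is worth mentioning when you contact support.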

UPDATE: we appear to have been the subject of a widespread DDoS attack (the source of which we are still investigating). This caused two of our core routers to become unresponsive, which adversely affected DNS and routing outside our network. We have mitigated the attack and all of the network is coming back online now. Please accept our apologies for this outage; we are aware that it affected a large section of our user base.

We are aware of an issue affecting large sections of our network centred in London. We are working urgently to fix this and will update here as work progresses.

COMPLETED: possible ADSL/FTTC outages: THE maintenance 25/11/17

UPDATE: This work is completed

UPDATE: this should have read THE, apologies.

We are planning a UPS replacement in our data centre in THE, following the recent problems the current unit has caused: the rack appears to be at risk of a power loss whenever we work near it. We have a replacement and plan to swap over to it, then investigate the cause of the issue on the current UPS at a later date.

This will mean a brief downtime for some hardware in the rack. Currently only ADSL/FTTC lines there will be customer-affecting; these should simply drop and then automatically reconnect at another of our PoPs. The work will take place during the course of Saturday evening.

If you are still unable to connect after this work is completed and you have rebooted your router, please contact support in the usual way.

EMERGENCY CONTACTS

The main support number is 0845 330 0666 (geographical: 01480 355566).

There’s a second, fallback (geographical) number: 01480 411616. All numbers ring directly into our support centre, which is manned 24 hours a day, 365 days a year.

We'd also suggest that all customers subscribe to our mailing list (link above); status messages and updates will be delivered by email.
