Apr 5, 2018 | Information, Outages, Unplanned downtime, Update
RESOLVED: the fibre break was fixed at 8:53pm last night and all services are back to normal. This ticket is now closed.
UPDATE 15:50 — our supplier advises: “In relation to the issue identified in the Cambridge area regarding loss of service, we are still working hard to resolve these issues as a priority.
Due to heavy traffic in the area, this is impacting our ability to get into the pit location necessary to move services onto spare fibres. Traffic management is required to safely carry out the work and this cannot be implemented until 20:00 this evening due to local authority restrictions.
From 20:00 onwards we can commence repairs, so services should begin to come back online overnight.
Please accept our apologies at this time as we are treating this matter with the utmost urgency.”
UPDATE 14:45 — because of the amount of splicing work needed and the traffic management problems at this location, the ERT (estimated repair time) has been pushed out to 20:45 today.
UPDATE 13:00 — traffic management is only permitted until 15:30, but traffic lights and barriers are on site ready to go. We’ve not been advised of a new ERT as yet.
UPDATE 12:30 — work continues on site at the node, and a request has been raised for traffic management. The current ERT is 15 minutes away, but the engineer is still working and awaiting a response from the traffic management team.
Next update due at 13:00.
UPDATE 10:31 — an engineer arrived on site at 10:08. The overnight change involved moving to a new cable, but there appears to be a fibre break. No ERT at present.
UPDATE 10:11 — this is now being treated as a high-priority fault, as completion has overrun, and it has been classified as an MSO due to the impact on Merula (and other) customers through this node.
UPDATE: we are escalating this for a substantive update, as the works have again overrun their new completion time.
Emergency unscheduled works by one of our main backhaul suppliers have overrun, meaning that one of our 1Gb links to London from the data centre here in Huntingdon is hard down. Traffic is being re-routed via another of our backhaul links, but as this link is slower you may see some slowness on some traffic until the work is completed.
We have been told that the work was due to finish at 6am but this has now been pushed out to 8am. We will continue to update here. Our apologies if this affects your connection(s).
Feb 1, 2018 | Information, Outages, Planned Work
Further to the outage yesterday and the corrective work by the carrier, we will attempt to bring the failed link back into service this evening. If all goes to plan, there may be a brief network disruption while the network reconverges. If the link fails again, we will remove it from the network and continue working with our supplier.
The link was brought online at approximately 12:30am this morning; however, the previous issue recurred. This has been raised with the supplier, who believe it has now been resolved correctly. We will plan a further maintenance window shortly to bring the link back into service. In the meantime there is no reduction in service, but there is a slight reduction in resilience to our Manchester PoP.
Feb 1, 2018 | Information, Outages, Planned Work, Unplanned downtime
During planned work deemed low risk, a supplier managed to inject a loop into one of our links. This caused a significant level of packet loss across our network at approximately 22:50 on 31st January.
The link was removed from use and service resumed, albeit with reduced resilience. The issue has been reported to the carrier, who have identified a potential cause and resolved it.
Jan 15, 2018 | Information, Outages, Unplanned downtime, Update
RESOLVED: 13:14
We are not aware of any ongoing issues now and believe that the cause of this problem has been identified and remedial action taken. Once again, apologies to anyone affected this morning.
UPDATE:
We have removed one of the backhaul lines from our network, as it appeared to be causing routing issues; we are seeing the majority of the affected lines coming back to their normal latency and response times.
We will continue to update here and apologise again for this affecting you at the start of the working week.
We are aware of an as-yet-unidentified issue affecting large numbers of our circuits, leading to slow-downs, poor-quality links and high latency. We are working on this now and will update here as soon as we have more information to share. We apologise that this is affecting you on a Monday morning.
Dec 4, 2017 | Information, Outages, Update
UPDATE2: initial indications from further analysis are that a DDoS attack targeted at the network core caused one of our core routers to cease routing packets correctly. However, the router did not shut down its BGP sessions as part of this, with the result that network traffic was black-holed for a large number of destinations for around 30 minutes. Service was restored after the routing processes were restarted in sequence. We will continue to investigate and update here, but we believe that all services are now returning to normal. Some routers may have a stale session and will require a reboot to bring them back online.
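For those curious about the failure mode: a BGP session can remain “Established” at the control-plane level even when the router has stopped forwarding packets correctly, which is why traffic was black-holed until the routing processes were restarted. The sketch below is a minimal, hypothetical illustration of how a monitor might catch this condition by pairing session state with active data-plane probes. It is not our actual tooling: the get_bgp_session_state() helper is a placeholder for however you query your router, and the peer/destination addresses are documentation examples.

```python
#!/usr/bin/env python3
"""Minimal sketch: flag 'BGP Established but traffic black-holed'.

Hypothetical illustration only. get_bgp_session_state() is a placeholder
for however you query your router (SNMP, NETCONF, vendor API); the
addresses below are RFC 5737 documentation prefixes.
"""
import subprocess

# Hypothetical peers mapped to destinations normally reached through them.
PEERS = {
    "192.0.2.1": ["198.51.100.1", "203.0.113.1"],
}


def get_bgp_session_state(peer: str) -> str:
    """Placeholder: return the BGP FSM state for this peer."""
    return "Established"


def dataplane_alive(dest: str) -> bool:
    """Send one ICMP probe (Linux iputils ping flags) and report success."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", dest],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


for peer, destinations in PEERS.items():
    state = get_bgp_session_state(peer)
    dead = [d for d in destinations if not dataplane_alive(d)]
    if state == "Established" and dead:
        # Control plane says the session is up, but packets to these
        # destinations are going nowhere: the black-hole condition
        # described above.
        print(f"ALERT: peer {peer} Established but unreachable: {dead}")
```

A check like this would have alerted on the condition above long before session state alone gave anything away, since the probes exercise the forwarding path rather than the control plane.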
UPDATE: we appear to have been the subject of a widespread DDoS attack (the source of which we’re still investigating). This caused two of our core routers to become unresponsive, which adversely affected DNS and routing outside our network. We have mitigated the attack here and all of the network is coming back online now. Please accept our apologies for this outage; we are aware that it affected a large section of our user-base.
We are aware of an issue affecting large sections of our network centred in London. We are working urgently to fix this and will update here as work progresses.
Nov 23, 2017 | Information, Outages, Planned Work, Update
UPDATE: this work is completed.
UPDATE: this should have read THE, apologies.
We are planning a UPS replacement in our data centre in THE, following recent problems caused by the unit: the rack appears to be at risk of power loss whenever we work near it. We have a replacement UPS; we plan to swap over to it and then investigate the cause of the issue on the current unit at a later date.
This will mean a brief period of downtime for some hardware in the rack. Only ADSL/FTTC lines terminating there will be customer-affecting; these should drop and then automatically reconnect at another of our PoPs. The work will take place during the course of Saturday evening.
If you are still unable to connect after this work is completed and you have rebooted your router, please contact support in the usual way.