We have seen alerts that some circuits from Virgin Media have dropped. We believe this is a Virgin Media issue but are currently investigating.
We will update this status as soon as we know more and within the hour at the latest.
Virgin have confirmed an issue at one of their hub sites affecting some parts of Hertfordshire. Senior engineers and the Core Incident team are working on this at the moment.
If service is not restored sooner, we will post another update within approximately 30 minutes.
We are seeing the affected circuits back online. At this point we don't have an 'all clear', so they should be considered at risk, but we hope this is resolved.
We have received the following update from Virgin Media:
"Please be advised that at 16:22 the following restoration was carried out: IOM Card 2 and MDA Card 1 were remotely reseated on T0090 Luton Metnet 2a, restoring all services."
This appears to have been a card failure at one of the Virgin Media hub sites. We apologise for the issues that occurred here.
4th Feb: the supplier network resolved all outstanding issues on this extended outage. Apologies again to anyone affected.
UPDATE 10:35: this problem, which affects a large number of FTTx lines, is still being worked on by Openreach engineers. The latest estimate for the work to be completed is 18:30 tonight. We'll continue to update here as and when we hear any more news.
We are aware that one of our suppliers has a major service outage. The supplier's engineers have no ETA for a fix as yet. We will update as we hear more. This will affect ADSL & FTTC lines. Our apologies for this outage.
Next update / Cleared: 03/02/2020 09:30 (estimated)
Fault type: Service-affecting outage (loss of service)
Status: Our supplier's engineers are working on the fault.
Affected exchanges: KINGS LANGLEY, LITTLE GADDESDEN, BOLSOVER, STAVELEY, HOLMEWOOD, OLD WHITTINGTON, CLAY CROSS, HOLYMOORSIDE, BASLOW, STONY STRATFORD, HANSLOPE, YARDLEY GOBION, CROXTON, CAXTON, LAISTERDYKE, DUDLEY HILL, UNDERCLIFFE, GRANTON, SOWERBY BRIDGE, ILLINGWORTH, CALDER VALLEY, WOMBWELL
Affected area codes: 01131 01132 01133 01138 01162 01173 01179 01212 01213 01214 01215 01216 01217 01223 01226 01246 01257 01274 01275 01311 01312 01313 01314 01315 01316 01317 01332 01422 01442 01446 01454 01480 01509 01773 01827 01908 01923 01926 01942 01954 02010 02032 02033 02081 02082 02083 02084 02085 02086 02087 02089 02476 02911 02920 02921 02922
RESOLVED: the fibre break was fixed at 8:53pm last night and all services are back to normal. This ticket is now closed.
UPDATE 15:50: our supplier advises, "In relation to the issue identified in the Cambridge area regarding loss of service: we are still working hard to resolve these issues as a priority.
Due to heavy traffic in the area, this is impacting our ability to get into the pit location necessary to move services onto spare fibres. Traffic management is required to safely carry out the work and this cannot be implemented until 20:00 this evening due to local authority restrictions.
From 20:00 onwards we can commence repairs so we should see services begin to come back online overnight this evening.
Please accept our apologies at this time as we are treating this matter with the utmost urgency.”
UPDATE 14:45: because of the amount of splicing work needed and the traffic management problems at this location, the ERT has been pushed out to 20:45 today.
UPDATE 13:00: traffic management is only permitted until 15:30, but traffic lights and barriers are on site ready to go. We've not been advised of a new ERT as yet.
UPDATE 12:30: work continues on site at the node. A request has been raised for traffic management. The current ERT is 15 minutes from now, but the engineer is still working and awaiting a response from the traffic management team.
Next update due at 13:00.
UPDATE 10:31: an engineer arrived on site at 10:08. The overnight change was to move services to a new cable, but there appears to be a fibre break. No ERT at present.
UPDATE 10:11: this is now being treated as a high-priority fault, as the work has overrun its completion time, and it has been classified as an MSO (major service outage) due to the effect on Merula (and other) customers served through this node.
UPDATE: we are escalating this for a substantive update, as the works have again overrun their new completion time.
Emergency unscheduled works by one of our main backhaul suppliers have overrun, meaning that one of our 1Gbps links to London from the data centre here in Huntingdon is hard down. Traffic is being re-routed via another of our backhaul links, but as this link is slower you may see some slowness on some traffic until the work is completed.
We have been told that the work was due to finish at 6am, but this has now been pushed out to 8am. We will continue to update here. Our apologies if this affects your connection(s).
We are not aware of any ongoing issues now and believe that the cause of this problem has been identified and remedial action taken. Once again, apologies to anyone affected this morning.
We have removed one of the backhaul lines from our network as this appears to be causing routing issues; we are seeing the majority of the affected lines coming back to their normal latency and response times.
We will continue to update here and apologise again for this affecting you at the start of the working week.
We are aware of an as-yet-unidentified issue affecting large numbers of our circuits, leading to slowdowns, poor-quality links and high latency. We are working on this now and will update here as soon as we have more information to share. We apologise that this is affecting you on a Monday morning.
If the phone system here (and the primary support number, which routes via it) is unavailable for any reason, as happened today when our VoIP system failed as part of the outage, please use our second, fallback (geographical) number: 01480 411616, which rings directly at our support centre.
We also suggest that all customers subscribe to our mailing list, ensuring that copies of these status messages and updates are delivered to you by email or, if you prefer, to your favourite RSS reader.
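If you would rather poll the status feed yourself than wait for email, a minimal sketch along the lines below will fetch the latest entries using only the Python standard library. The feed URL shown is a placeholder assumption, not our actual feed address; substitute the URL from your RSS reader subscription.

```python
#!/usr/bin/env python3
"""Poll a status RSS feed and print the most recent entries.

The feed URL is a placeholder assumption; replace it with the address
of the status feed you actually subscribe to.
"""
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://status.example.net/feed/"  # placeholder, not a real URL

def latest_items(url: str, limit: int = 5):
    """Yield 'date  title' strings for the newest items in an RSS 2.0 feed."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    # Standard RSS 2.0 layout: <rss><channel><item>...</item></channel></rss>
    for item in root.iter("item"):
        if limit <= 0:
            break
        title = item.findtext("title", default="(no title)")
        date = item.findtext("pubDate", default="")
        yield f"{date}  {title}"
        limit -= 1

if __name__ == "__main__":
    for line in latest_items(FEED_URL):
        print(line)
```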
If you have any questions, please email firstname.lastname@example.org.
UPDATE2: initial indications on further analysis are that the DDoS attack targeted at the network core caused one of our core routers to cease routing packets correctly. However, the router did not shut down its BGP sessions, with the result that network traffic was black-holed for a large number of destinations for a period of around 30 minutes. Service was restored after the routing processes were restarted in sequence. We will continue to investigate and update here, but we believe that all services are now returning to normal. Some routers may have a stale session and will require a reboot to bring them back online.
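For readers wondering how a fault like this can hide from routing protocols: because the BGP sessions stayed established, the control plane looked healthy while the data plane was silently dropping traffic. The only reliable way to catch that gap is to probe reachability through the router directly. The sketch below is a minimal, hypothetical example of such a data-plane check; the probe targets and threshold are illustrative placeholders, not our actual monitoring configuration.

```python
#!/usr/bin/env python3
"""Data-plane reachability probe: catches black-holing that BGP state misses.

A router can keep its BGP sessions established (control plane "up") while
failing to forward packets. Pinging a spread of destinations routed through
it detects the gap. Targets and threshold here are illustrative only.
"""
import subprocess

# Hypothetical sample of destinations reached via the router under test
# (these are RFC 5737 documentation addresses, used as placeholders).
PROBE_TARGETS = ["192.0.2.1", "198.51.100.1", "203.0.113.1"]
ALERT_THRESHOLD = 0.5  # alert if more than half of the probes fail

def ping_ok(host: str, timeout_s: int = 2) -> bool:
    """Return True if a single ICMP echo to `host` succeeds (Linux ping)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def main() -> None:
    failures = sum(not ping_ok(host) for host in PROBE_TARGETS)
    loss = failures / len(PROBE_TARGETS)
    if loss > ALERT_THRESHOLD:
        # BGP may still report Established at this point; that is the trap.
        print(f"ALERT: {loss:.0%} of data-plane probes failing; possible black-holing")
    else:
        print(f"OK: {failures}/{len(PROBE_TARGETS)} probe failures")

if __name__ == "__main__":
    main()
```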
UPDATE: we appear to have been the subject of a widespread DDoS attack, the source of which we are still investigating. This caused two of our core routers to become unresponsive, which adversely affected DNS and routing outside our network. We have mitigated the attack and all of the network is coming back online now. Please accept our apologies for this outage; we are aware that it affected a large section of our user base.
We are aware of an issue affecting large sections of our network centred in London. We are working urgently to fix this and will update here as work progresses.