Need help or advice?
Call us FREE 0800 298 2375

Outages

OUTAGE: overnight works

RESOLVED: the fibre break was fixed at 8:53pm last night and all services are back to normal. This ticket is now closed.

UPDATE 15:50 — our supplier advises: “In relation to the issue identified in the Cambridge area regarding loss of service: we are still working hard to resolve these issues as a priority.

Due to heavy traffic in the area, this is impacting our ability to get into the pit location necessary to move services onto spare fibres. Traffic management is required to safely carry out the work and this cannot be implemented until 20:00 this evening due to local authority restrictions.

From 20:00 onwards we can commence repairs so we should see services begin to come back online overnight this evening.

Please accept our apologies at this time as we are treating this matter with the utmost urgency.”

UPDATE 14:45 — because of the amount of splicing work needed and the traffic management problems at this location, the ERT has been pushed out to 20:45 today.

UPDATE 13:00 — Traffic management is only permitted until 15:30, but traffic lights and barriers are on site ready to go. We’ve not yet been advised of a new ERT.

UPDATE 12:30 — work continues on site at the node. A request for traffic management has been raised. The current ERT is 15 minutes away, but the engineer is still working and awaiting a response from the traffic management team.

Next update due at 13:00.

UPDATE 10:31am — an engineer arrived on site at 10:08. The overnight change involved moving to a new cable, but there appears to be a fibre break. No ERT at present.

UPDATE 10:11am — this is now being treated as a high-priority fault, as completion has overrun, and it has been classified as an MSO due to the effect on Merula (and other) customers connected through this node.

UPDATE: we are escalating this for a substantive update, as the works have again overrun their new completion time.

Emergency unscheduled works by one of our main backhaul suppliers have overrun, meaning that one of our 1Gb links to London from the data centre here in Huntingdon is hard down. Traffic is being re-routed via another of our backhaul links, but as this is slower you may see some slowness on some traffic until the work is completed.

We were told that the work was due to finish at 6am, but this has now been pushed out to 8am. We will continue to update here. Our apologies if this affects your connection(s).

PLANNED WORK: 1/2/18 UPDATE

Further to the outage yesterday and the corrective work by the carrier, we will attempt to bring the failed link back into service this evening. If all goes to plan, there may be a brief network disruption while the network reconverges. If the link fails again, we will remove it from the network and continue work with our supplier.

The link was brought online at approx 12:30am this morning; however, the previous issue re-occurred. This has been raised with the supplier, who believe it is now correctly resolved. We will plan a further maintenance window shortly to bring the link back into service. In the meantime there is no reduction in service, but there is a slight reduction in resilience to our Manchester PoP.

OUTAGE: Network drop 31/1/18

During planned work deemed low risk, a supplier inadvertently injected a loop into one of our links. This caused a significant level of packet loss into our network at approx 22:50 on 31st January.
The link was removed from use and service resumed, albeit with reduced resilience. The issue has been reported to the carrier, who have identified a potential problem and resolved it.

OUTAGES: connectivity & latency issues

RESOLVED: 13:14

We are not aware of any ongoing issues now and believe that the cause of this problem has been identified and remedial action taken. Once again, apologies to anyone affected this morning.

UPDATE:

We have removed one of the backhaul lines from our network as this appears to be causing routing issues; we are seeing the majority of the affected lines coming back to their normal latency and response times.

We will continue to update here and apologise again for this affecting you at the start of the working week.

We are aware of an as yet unidentified issue affecting large numbers of our circuits leading to slow-downs, poor quality links and high latency. We are working on this now and will update here as soon as we have more information to share. We apologise that this is affecting you on a Monday morning.

UPDATE: core router issues in London & network outages

UPDATE 2: initial indications from further analysis are that a DDoS attack targeted at the network core triggered one of our core routers to cease routing packets correctly. However, the router did not shut down its BGP sessions, with the result that network traffic was black-holed for a large number of destinations for around 30 minutes. Service was restored after the routing processes were restarted in sequence. We will continue to investigate and update here, but we believe that all services are now returning to normal. Some routers may have a stale session and will require a reboot to bring them back online.

UPDATE: we appear to have been the subject of a widespread DDoS attack (the source of which we are still investigating). This caused two of our core routers to become unresponsive, which adversely affected DNS and routing outside our network. We have mitigated the attack and all of the network is coming back online now. Please accept our apologies for this outage; we are aware that it affected a large section of our user base.

We are aware of an issue affecting large sections of our network centred in London. We are working urgently to fix this and will update here as work progresses.

COMPLETED: possible ADSL/FTTC outages: THE maintenance 25/11/17

UPDATE: This work is completed

UPDATE: this should have read THE, apologies.

We are planning a UPS replacement in our data centre in THE, following recent problems it has caused: the rack appears to be at risk of power loss whenever we work near it. We have a replacement unit and plan to swap over to it, then investigate the cause of the issue on the current UPS at a later date.

This will mean a brief downtime for some hardware in the rack. Only ADSL/FTTC lines there will be customer-affecting; these should simply drop and then automatically reconnect at another of our POPs. This will happen during the course of Saturday evening.

If you’re still unable to connect after this work is completed and a router reboot, please contact support in the usual way.

COMPLETED: last Merula mail server upgrades

The email servers are being upgraded over the weekend, starting late on Friday evening, to a pair of faster mirrored servers. This won’t affect everyone, but during the course of the migration email may appear to have disappeared from in-boxes. This is no cause for concern and is merely a side-effect of the migration process.

No email will be lost; it’s just the final syncing process needed to improve the mail servers for everyone. During the course of the weekend the email should fully re-appear.

New email will continue to be delivered straight away; older emails will be restored as the sync process progresses. Any security certificate error messages can be safely ignored for the duration of this migration; these will disappear once both servers are fully synced.

This work is now completed. If there are any issues with your mail, please contact support via the normal routes ASAP.

UPDATE: Broadband packet loss & intermittent connectivity

UPDATE:

We have seen services starting to recover, and our traffic profile is virtually back to normal. Any subscribers yet to reconnect may require a router reboot if the issue persists.

The fault is still open with our supplier until the overall service has been restored. Our apologies again to those affected.

+++++++++++++++++++

One of our back-haul providers is aware of an ongoing issue affecting a small section of our lines, causing packet loss, intermittent connectivity, or sometimes both. NOTE: this isn’t affecting all lines, but the following STD codes are those seeing issues through this supplier. We expect an update by 14:30. In the meantime, we apologise if your line is one of those affected.

01171 01173 01179 01200 01214 01282 01372 01483 01485 01512 01513 01514 01515 01517 01518 01519 01527 01553 01604 01628 01905 01932 02010 02011 02030 02031 02032 02033 02034 02035 02070 02071 02072 02073 02074 02075 02076 02077 02078 02079 02080 02081 02082 02083 02084 02085 02086 02087 02088 02089 02311 02380

UPDATE 4: Network / Power Issue – Harbour Exchange Square

We have lost power to our rack in Harbour Exchange Square. Our UPS held power for a while, but the batteries are now exhausted, meaning that services provided from Harbour Exchange Square are currently affected. This primarily relates to some of our leased lines which are single-homed; most other services have re-routed via alternative data centres.

The data centre technicians are working to restore power to the rack ASAP, and we then expect to see services recover.

We will update here further as we have more information.

Note that this does not affect services (including leased lines) from other data centres, although there may have been some network instability initially.

We are sorry for this issue.

UPDATE 14:15 — We are starting to see power restored to our rack, though some services are still affected; many are now restored. We are working through the remaining issues and will update here further, but in many cases you should see service restored now.

UPDATE 14:22 — Equinix (our data centre supplier in HEX) have just emailed an Incident Update confirming a possible power issue at the facility. We are continuing to see services restore; a few remaining services are down and we continue to work to resolve these ASAP. NOTE: we have used the opportunity presented by the power loss to complete the UPS battery replacement, so there will be no further maintenance on the power within our rack, and in the unlikely event of another power failure we now have new batteries in the UPS.

UPDATE 15:30 — We have restored most services now, although it seems the power failure caused a switch in the rack to fail. All critical services have been moved off the affected switch and a replacement is being organised, hopefully to be swapped in later this afternoon/evening. There should now be no affected services; however, the network should be deemed at risk due to the reduction in redundancy. We will update here once the switch replacement starts.

UPDATE 21:10 — A replacement switch is now in place and configured in Harbour Exchange Square, and the remaining services (and resilience) are now restored. The power issue needs further investigation, and the data centre may need to change the breaker we are connected to. However, this will be a separate planned works and will be announced later. It may be at short notice but will be out of core hours, and will not be today.

We believe service is now fully restored. If anyone has any ongoing issues, please raise them with support via the normal means.

FIXED: some circuits are affected & currently down 17th Sep 9am

10:23am UPDATE: the supplier reports that the problem has been resolved and we believe that all circuits are now back online. Customers on affected circuits may need to reboot their router to bring their session back online.

The following exchanges have been affected by this issue since 6:21am this morning.

BT and supplier engineers are en route to work on site. There is no time-to-fix estimate yet, but we will update here as we hear more.

 

Exchanges affected include Barrow, Buntingford, Bottisham, Burwell, Cambridge, Crafts Hill, Cheveley, Clare, Comberton, Costessey, Cherry Hinton, Cottenham, Dereham, Downham Market, Derdingham, Ely, Fakenham, Fordham Cambs, Feltwell, Fulbourn, Great Chesterford, Girton, Haddenham, Histon, Holt, Halstead, Harston, Kentford, Kings Lynn, Lakenheath, Littleport, Madingley, Melbourne, Mattishall, Norwich North, Rorston, Science Park, Swaffham, Steeple Mordon, Soham, Sawston, Sutton, South Wootton, Swavesey, Teversham, Thaxted, Cambridge Trunk, Trumpington, Terrington St Clements, Tittleshall, Willingham, Waterbeach, Watlington, Watton, Buckden, Crowland, Doddington, Eye, Friday Bridge, Glinton, Huntingdon, Long Sutton, Moulton Chapel, Newton Wisbech, Parson Drove, Papworth St Agnes, Ramsey Hunts, Sawtry, Somersham, St Ives, St Neots, Sutton Bridge, Upwell, Warboys, Werrington, Whittlesey, Woolley, Westwood, Yaxley, Ashwell, Gamlingay and Potton.

We are aware that some other exchanges may be impacted.

Update – we have just started to see some circuits recover, but we have no update from the carrier as yet.

EMERGENCY CONTACTS

The main support number is 0845 330 0666 (geographical 01480 355566)

There’s a second, fallback (geographical) number: 01480 411616. All numbers ring directly at our support centre, which is manned 24 hours a day, 365 days a year.

We'd also suggest that all customers subscribe to our mailing list (link above); status messages and updates will be delivered by email.

Subscribe