Please be advised of the following restoration details: at 16:22, IOM Card 2 and MDA Card 1 were remotely reseated on T0090 Luton Metnet 2a, restoring all services.
We are aware of some instability on our core London network; this has caused a number of short periods of packet loss this afternoon. Our NOC team are currently investigating, and we will update this notice as we know more.
This is being worked on as a high priority.
We apologise for any issues this may cause.
Investigations into the cause of this issue are ongoing, and our engineers continue to work to understand and resolve the root cause. We can see BGP sessions dropping between our core routers but do not yet fully understand the underlying cause. Many users may not notice the issue; however, there will be occasional packet loss and ‘strange’ routes while this is being worked on.
While this issue has improved, it is still ongoing. We are seeing BGP sessions between two of our core routers (Telehouse North and Telehouse East) flapping, which is causing some packet loss and routing drops. Now that we have isolated the reason for the drops, we are working to find out why it is continuing. Our initial thought was that the routers had stale data, and we therefore rebooted them; however, while the issue has improved, it has not gone away. We will update this further shortly.
The network has now been stable for over an hour and we believe the issue is cleared. We will, however, monitor this overnight before we close this incident. It appears that this was down to a forwarding issue on a core Juniper switch in Telehouse West. After ruling out other (more likely) causes, this switch was reloaded at approximately 22:00, and so far the routing issues and flaps have calmed.
As the switches showed no obvious errors or log issues, we will not close this until we are certain, and we will investigate in more detail. However, we hope there are no further customer issues as a result. We are sorry if this issue affected your service today.
[Update 26/4/202 10:00]
The network has been stable overnight following the reboot of the core Juniper switch. We are continuing to monitor; however, we do believe this issue is now resolved.
We are sorry for any issues caused by this problem.
4th Feb: The supplier network resolved all outstanding issues on this extended outage. Apologies again to anyone affected.
10:35am UPDATE: This problem, which affects a large number of FTTx lines, is still being worked on by Openreach engineers. The latest estimate for the work to be completed is 18:30 tonight. We’ll continue to update here as and when we hear any more news.
We are aware that one of our suppliers has a major service outage. The supplier’s engineers have no ETA for a fix as yet. We will update as we hear more. This will affect ADSL & FTTC lines. Our apologies for this outage.
Start: 03/02/2020 04:15
Raised: 03/02/2020 04:15
Detected: 03/02/2020 04:15
Next update / Cleared: 03/02/2020 09:30 (estimated)
Cleared Reason: N/A
Incident Headline: Service affecting outage – Loss of Service
Incident Details: Our supplier engineers are working on the fault.
Exchange Names: KINGS LANGLEY, LITTLE GADDESDEN, BOLSOVER, STAVELEY, HOLMEWOOD, OLD WHITTINGTON, CLAY CROSS, HOLYMOORSIDE, BASLOW, STONY STRATFORD, HANSLOPE, YARDLEY GOBION, CROXTON, CAXTON, LAISTERDYKE, DUDLEY HILL, UNDERCLIFFE, GRANTON, SOWERBY BRIDGE, ILLINGWORTH, CALDER VALLEY, WOMBWELL
Area Codes: 01131 01132 01133 01138 01162 01173 01179 01212 01213 01214 01215 01216 01217 01223 01226 01246 01257 01274 01275 01311 01312 01313 01314 01315 01316 01317 01332 01422 01442 01446 01454 01480 01509 01773 01827 01908 01923 01926 01942 01954 02010 02032 02033 02081 02082 02083 02084 02085 02086 02087 02089 02476 02911 02920 02921 02922
We have detected a drop in a large number of broadband sessions from one of our upstream wholesale suppliers. We have raised this as a case with them and will update you as soon as possible. Both of our interconnects are only allowing a small number of live sessions, so we believe this issue is on their network.
If your session has sync but will not connect, this is probably the cause. We will update this case within 30 minutes, if not before.
We apologise for the issue at this time.
We are now seeing sessions return. We do not have an update from the supplier as yet and are following up for a formal update. As soon as we have some news, we will update further.
One of our core switches in Harbour Exchange Square had a routing issue at about 14:20 today. After our team checked the switch, it was decided that the fastest resolution was to perform a routing engine switchover. We would normally plan this out of hours; however, as this was causing issues for customers, the decision was taken to perform it immediately.
This affected all services directly connected to, or routed through, this switch.
The switch reboot has now completed, and service should be restored to all locations and services.
If there is still an ongoing issue with any service, please report it to support in the normal way.
We apologise for any issues this may have caused.
As part of our commitment to service improvement, we would like to inform you that we will be performing planned critical maintenance as part of capacity upgrades on our core network.
Maintenance window start: Wednesday 10th April 2019 23:00 GMT
Maintenance window end: Thursday 11th April 2019 06:00 GMT
Expected impact: The work will affect some of our DSL subscriber customers, who will see intermittent service during the maintenance window, with a short period of downtime. Service should be restored automatically, but it may require a router reboot. If service has not been restored after this action, please raise a support case in the normal manner.