Our engineers, along with Telehouse, will be working this evening to upgrade the power supply to our rack in Telehouse West to allow expansion and additional equipment to be installed. The rack has two resilient feeds and all core equipment is connected to both feeds.
During the work one power bar will be replaced at a time, meaning that for a short period there will be no resilience on the power in the rack. We do not anticipate any impact, but the service should be considered at risk.
We also have a fibre link between Telehouse West and Telehouse North which is showing some low-level errors. Last week Telehouse tested the fibre and it passed. While on site we will replace the optics on this link to see if this clears the issue; if not, we will ask Telehouse to run further checks on the fibre. While this is being done there will again be a brief loss of resilience on our fibre ring, but we do not expect any customer impact.
We will update this if anything changes – and on completion of the work
This work completed on Friday evening. We did see two issues during the work. The router that is used for some OFNL connections for Air customers had an issue for about an hour, due to an unplanned power cycle after which it did not restart correctly. A few fibre NNIs also bounced; this also affected one of our name servers. We are sorry for any service disruption from this.
We have had an alert that one of our fibre links between Cloud House and Telehouse is currently down. At this point it is not affecting the Merula core network, but it does mean that we have a loss of resilience between sites.
This is service impacting for customers taking a wave service between these two sites.
The issue has been raised with the fibre provider, who are actively working on it.
We will update here as we know more.
[update 11/5/23 8:00]
The fibre provider found the cause of the break and restored service on this link at approx 1:45am. We have been monitoring this link since and all appears stable.
With the recent migrations from Air Broadband to Merula we need to perform a router reload and add additional IP addresses to our router's pool to enable a large number of migrations. This should also clear the issue that has been seen over the last few days.
Note this maintenance will only affect customers where we have ‘adopted’ the Air router, as opposed to those moved to our own setup. We anticipate this will be the last change we need to make for some time.
The plan for this evening is:
1) Upgrade the router to the latest firmware and reboot
2) Add additional IP address pools (a rough sketch of this step follows below)
Due to the nature of this, customers on this router may see a longer initial outage from the upgrade / reboot (approx 10-20 minutes), followed by a few shorter drops as the extra IPs are applied and the router interface is reset.
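For anyone curious what step 2 involves in practice, the sketch below shows one way an extra subscriber pool could be pushed to a router using the netmiko Python library. This is an illustration only, not our actual change: the platform, hostname, credentials, pool name and address range are all placeholders.

    # Illustration only: placeholder platform, host, credentials and
    # address range - not our production configuration.
    from netmiko import ConnectHandler

    router = ConnectHandler(
        device_type="cisco_ios",      # assumed platform for the example
        host="lns1.example.net",      # placeholder hostname
        username="admin",
        password="changeme",
    )

    # Push one extra local pool for PPPoE subscribers, then save.
    output = router.send_config_set(
        ["ip local pool CUSTOMER-POOL-2 198.51.100.1 198.51.100.254"]
    )
    print(output)
    router.save_config()
    router.disconnect()

A new pool only hands out addresses to sessions established after it is added, which is consistent with the short drops mentioned above as interfaces are reset.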
We will start this work shortly and update once complete
This work is now completed; the sessions that dropped are back and we see traffic passing. If you do see any issues, please let us know.
Our router vendor has released new firmware for the routers we install at our customer premises.
Over the two dates above we will perform a rolling upgrade of all of our leased line customer routers.
Each affected customer will see a 5-10 minute drop on their service while their router upgrades and reboots. Upgrades will start after 10pm each evening and complete in the early hours of the following day.
If you have any questions please do email the support team
We will make a small network change to update the MTU between a number of our core devices to improve network performance.
This will likely cause a brief blip as the change is applied.
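If you want to sanity-check your own path once the work is done, a don't-fragment ping is the simplest test: if a packet of a given size passes without fragmenting, the path MTU is at least that size. A minimal sketch, assuming a Linux host (the target hostname is a placeholder):

    # Path-MTU sanity check: send one ping with the don't-fragment
    # flag set (Linux ping). Placeholder target hostname.
    import subprocess

    TARGET = "core1.example.net"   # placeholder
    OVERHEAD = 28                  # 20-byte IP header + 8-byte ICMP header

    def passes_unfragmented(target: str, mtu: int) -> bool:
        """True if a packet filling an MTU of `mtu` bytes gets through whole."""
        size = mtu - OVERHEAD
        result = subprocess.run(
            ["ping", "-c", "1", "-M", "do", "-s", str(size), target],
            capture_output=True,
        )
        return result.returncode == 0

    for mtu in (1500, 4470, 9000):
        print(mtu, "ok" if passes_unfragmented(TARGET, mtu) else "blocked")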
We will update this post as the work completes
This work completed just after 10pm yesterday evening and was a success.
We are planning a window to extend our network to two new Data Centres (Cloud House and Slough).
There should be no impact on services; however, this is classed as an “At Risk” maintenance window due to the nature of the works on core network infrastructure.
We will update this post as the work progresses and completes
This work completed with no impact.
We have had an alert that the UPS in one of our racks is failing. This is a discrete UPS acting as a backup in one of our core cabinets, not the main UPS. We will investigate this and swap it out as needed.
Most equipment in the rack is dual fed, so there will be no impact. However, a few devices are single fed, so a small number of devices may briefly power down if the UPS needs swapping out.
We plan to do this today: while this is short notice, we feel it is better to do the work at the weekend than to risk a change or a possible issue on a weekday.
We will update this post as we are on site and investigate
This work completed yesterday; the faulty UPS was swapped out and replaced. There was a short period of downtime for a number of services in Huntingdon, as one switch used to interconnect several racks was fed from the UPS that needed to be swapped.
We are making a small change to the config of our new LNS servers for broadband services
This may cause a very brief drop on some PPPoE sessions to us, and your connection may drop. The drop should last no more than a few seconds while the router re-connects.
We will update this page as the work progresses
The main config change here has been completed. The only noticeable difference for customers is that in some cases they may see a different upstream IP on a traceroute. Some sessions did reset as predicted, but we saw all sessions that dropped re-connect. If you do see any issues, please reboot your router and, if that fails, contact support in the normal way.
Following the advice of Juniper, we are deploying an upgrade to the Juniper software on our core switches in Telehouse North and East and rebooting them to clear an at-risk issue on the switches.
Expected downtime per site is approx 10 minutes; we will upgrade the sites one at a time.
While the switches are working fine now, the upgrade clears a known issue that can cause instability, which we have seen in the lab.
Through a customer report we have become aware of an issue with an MPLS link across our network. To resolve this we need to change the MTU across some of our core links.
We need to make this change this evening to fully restore service on the link. We will make the change one link at a time. Due to the resilience built into our network we don’t anticipate any issues; however, as with any config change, there is a risk of a small number of very brief dropouts, or you may see slightly different routing between sites.
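For background on why the core MTU matters for MPLS: every MPLS label adds 4 bytes on top of the packet, so a core link whose MTU is only 1500 bytes cannot carry a full 1500-byte customer packet once labels are pushed. A quick worked example (the label depths shown are illustrative, not a statement of our exact label stack):

    # Back-of-envelope check: minimum core-link MTU needed to carry
    # a full 1500-byte customer packet under an MPLS label stack.
    # The label depths below are illustrative assumptions.
    IP_MTU = 1500      # customer-facing MTU
    LABEL_BYTES = 4    # each MPLS label is 32 bits

    for labels in (1, 2, 3):   # e.g. transport label plus service labels
        required = IP_MTU + labels * LABEL_BYTES
        print(f"{labels} label(s): core MTU must be at least {required}")

    # 1 label(s): core MTU must be at least 1504
    # 2 label(s): core MTU must be at least 1508
    # 3 label(s): core MTU must be at least 1512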
We will update this as the work completes