Brief network bounce this morning

As part of our preparation to perform further diagnostics on one of our routers in Telehouse North, following the issues we have been seeing with it, we have made some changes to the network configuration. These changes also reduce our reliance on this router and therefore mean that any further issues should not be customer-affecting.

During the config change you *may* have seen a very brief network issue to some locations; however, this should not have lasted more than 2-3 minutes at most. All work is now complete, and if you are still seeing any issues please contact support in the normal manner.

We apologise for the lack of notice here, but we wanted to ensure that any further issue did not cause problems for customers.

Central Pipes unbalanced

Due to an earlier BT incident, for which we are still awaiting clarification, we are currently seeing an imbalance on the central pipes used for the @adsl.merula and @adsl.wizards realms. If your login is on one of these realms and you are seeing slow speeds, you may wish to disconnect and log in again in an attempt to move to the less congested central.

You may need to do this more than once, since you have a 50/50 chance of re-connecting to the congested central each time.

If we can get more details from BT, we will post them here.

Possible Disruption Friday 16/5

Following the recent router issues, we have determined that they were caused by a memory fault on the router hardware. We have now sourced replacement RAM and will have engineers on site on Friday to fit it. The router itself may be down for 10-15 minutes, but most if not all traffic should route around this and there should be little, if any, noticeable impact.

Short outage this afternoon

There was a short outage (lasting 10-15 minutes) to some servers that sit behind one of our core routers (lon3-gw1). Upon investigation we found that the router had crashed, and it was rebooted.

This appears to have restored service and we are now monitoring the router closely; however, we do not expect any further issues at this time.

Partial Outage this afternoon

Due to a power failure in one of our London data centres this afternoon, some users may have seen issues with some of our services for a short while. Most services were unaffected, but those routing through Telehouse North may have been affected for approximately 30 minutes (many services routed around the issue).

The data centre has confirmed that a number of racks in the suite we occupy lost power due to a UPS fault. We are awaiting further assurances that steps have been taken to ensure this issue does not recur.

We apologise for any problems caused.

ADSL Centrals unbalanced

It appears that we currently have a considerable imbalance of ADSL users, with more than 80% of our users on one of our centrals.

If you are seeing slow pings or slow connectivity, please try rebooting your ADSL router. You may have to do this more than once, since you have roughly a one-in-three chance of hitting the same ADSL central on re-connect, so a couple of attempts is usually enough. If things still look bad later, we can ask BT to manually balance the pipes, but this would cause virtually all sessions to reset, something we would like to avoid if possible.

Server Migration

We are planning a small number of server moves this weekend as we continue to migrate servers to our new data centre. No core connectivity servers will be affected; however, some client email servers may be unavailable for a time.

We are also moving a number of private co-lo servers, but in all cases we have contacted the owners directly.

If there are any issues, email support or call us on 0845 330 0666.

At Risk

We will have engineers working in some of our racks over the next couple of days getting some of our new fibre links installed and configured. This work should NOT have any impact on service, but as with any work of this kind there is always a small element of risk.

We will update this ticket once the work is complete.

Network Outages

We have experienced a number of outages today affecting servers on the 217.146.97.x IP range, including some shared servers and a couple of customer co-lo servers. These were caused by a DoS attack that overwhelmed our firewall.

We have now taken a number of steps to mitigate this issue, including upgrading the firewall and carrying out some network maintenance. So far all appears well, but we will be monitoring this over the next day or two to ensure everything stays stable.

This issue now appears to be resolved and all is working well.