

UPDATE: Broadband packet loss & intermittent connectivity


We have seen services starting to recover and our traffic profile is virtually back to normal. Any subscribers yet to reconnect may need to reboot their router if the issue persists.

The fault is still open with our supplier until the overall service has been restored. Our apologies again to those affected.


One of our back-haul providers is aware of an ongoing issue affecting a small section of our lines, causing packet loss, intermittent connectivity, or both. NOTE: this isn’t affecting all lines, but the STD codes below are those seeing issues through this supplier. We expect an update by 14:30. In the meantime, we apologise if your line is one of those affected.

01171 01173 01179 01200 01214 01282 01372 01483 01485 01512 01513 01514 01515 01517 01518 01519 01527 01553 01604 01628 01905 01932 02010 02011 02030 02031 02032 02033 02034 02035 02070 02071 02072 02073 02074 02075 02076 02077 02078 02079 02080 02081 02082 02083 02084 02085 02086 02087 02088 02089 02311 02380

FIXED: Some circuits affected & currently down, 17th Sep 9am

10:23am UPDATE: the supplier reports that the problem has been resolved and we believe that all circuits are now back online. Customers on affected circuits may need to reboot their router to bring their session back online.

The following exchanges have been affected by this issue since 6:21am this morning.

BT and supplier engineers are en route to work on site. There is no estimated time to fix yet, but we will update here as we hear more.


Exchanges affected include Barrow, Buntingford, Bottisham, Burwell, Cambridge, Crafts Hill, Cheveley, Clare, Comberton, Costessey, Cherry Hinton, Cottenham, Dereham, Downham Market, Dersingham, Ely, Fakenham, Fordham Cambs, Feltwell, Fulbourn, Great Chesterford, Girton, Haddenham, Histon, Holt, Halstead, Harston, Kentford, Kings Lynn, Lakenheath, Littleport, Madingley, Melbourn, Mattishall, Norwich North, Royston, Science Park, Swaffham, Steeple Morden, Soham, Sawston, Sutton, South Wootton, Swavesey, Teversham, Thaxted, Cambridge Trunk, Trumpington, Terrington St Clements, Tittleshall, Willingham, Waterbeach, Watlington, Watton, Buckden, Crowland, Doddington, Eye, Friday Bridge, Glinton, Huntingdon, Long Sutton, Moulton Chapel, Newton Wisbech, Parson Drove, Papworth St Agnes, Ramsey Hunts, Sawtry, Somersham, St Ives, St Neots, Sutton Bridge, Upwell, Warboys, Werrington, Whittlesey, Woolley, Westwood, Yaxley, Ashwell, Gamlingay and Potton.

We are aware that some other exchanges may also be impacted.

Update: we have just started to see some circuits recover, but have no update from the carrier as yet.

UPDATE: leased lines outage earlier today

RFO (Reason For Outage), 09:40am:

To fix transient IPv6 and other intermittent routing issues we had seen recently, we were obliged to upgrade the software on one of our core routers. This router holds live and backup routes that allow a smooth failover should a single router in London fail. However, in an undocumented change from the software supplier, the latest software sets both the primary and the backup router as live. This produced a routing loop for some IP addresses with static routes originating from the affected router, which therefore did not fail over correctly as they had previously.
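To illustrate the failure mode described above (a simplified sketch, not our actual configuration; the router names and prefix are hypothetical): with correct failover, the backup router forwards towards the live one, which delivers. Once both routers believe they are live, each one's next hop for the affected prefixes is the other, so packets circulate instead of being delivered:

```python
def forward(tables, start, dest, ttl=8):
    """Follow next-hop entries for `dest` starting at `start`.

    Returns the list of hops on success, or None if there is no route
    or the hop limit is exhausted (i.e. a routing loop).
    """
    hop, path = start, [start]
    for _ in range(ttl):
        nxt = tables[hop].get(dest)
        if nxt is None:          # no route for this prefix
            return None
        if nxt == "deliver":     # this router is the egress for the prefix
            return path
        hop = nxt
        path.append(hop)
    return None                  # hop limit exhausted: routing loop

# Healthy failover: the backup defers to the live router, which delivers.
healthy = {
    "router-a": {"203.0.113.0/24": "deliver"},
    "router-b": {"203.0.113.0/24": "router-a"},
}

# Faulty state: both routers consider themselves live and point at each other.
faulty = {
    "router-a": {"203.0.113.0/24": "router-b"},
    "router-b": {"203.0.113.0/24": "router-a"},
}

assert forward(healthy, "router-b", "203.0.113.0/24") == ["router-b", "router-a"]
assert forward(faulty, "router-a", "203.0.113.0/24") is None  # packets loop
```

In a real network the loop ends when each packet's TTL expires; the `ttl` bound in the sketch plays the same role.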

Again, please accept our apologies for this short outage. It shouldn’t have happened.

We know the cause, and the problem has now been fixed on the one affected router. We have also checked all the other routers in the network and are confident they are all now running properly.

UPDATE 09:27am:

We have identified the root cause, located on a core switch in one of our London locations, and are working on bringing this back into service. No ETA yet, but we expect this to be resolved shortly. Apologies for the downtime some of you are experiencing.

09:09am We are aware of reports of leased lines down and are investigating. We will post more updates here as we establish the cause and an ETA to fix.

[resolved] OUTAGE: broadband lines are down for some customers

This was resolved at approximately 4pm, after the faulty switch was swapped out on the supplier network.

[Update at 14:46]

The supplier advises that most lines are now coming back on-stream; this may take a few more minutes as the RADIUS servers catch up. Anyone still affected after this should power off their router for at least 20 minutes to clear any stale session. Please email us if this fails to bring you back live.

We apologise for the lengthy downtime and are looking at further remedial work with the supplier to ensure that such a failure doesn’t affect us in future.

[Update at 14:33]

Apologies that we have nothing concrete yet on time to fix; we are escalating this to senior managers at the supplier to get it resolved.

[Update at 13:33]

Senior engineers are currently on site working on the faulty hardware.

Further updates will be posted once the work has been completed.

[Update at 12:35]

Supplier update: We’re seeing a partial recovery on the equipment.

We’re aware some circuits are still down; our engineers are looking to replace some of the hardware in the switch stack.

Further updates will be posted when available.

[Update at 12:10]

The supplier has a new switch en route to the site to be swapped in; they expect this to be complete by 1pm. We’ll update as this progresses.

We are aware of a problem affecting one of the interconnect switches on a transit supplier’s network, which means that a number of lines dropped earlier this morning and are still down. The supplier and our engineers are working on getting this switch bypassed and replaced. We currently have no time-frame for a fix, but we believe service will not be affected for too long.


[update 17:40]

We believe that all connections have now been restored. If you still have VOIP problems, please raise a ticket in the usual manner.

We apologise for this down-time and are actively investigating alternatives and fail-over solutions as this is the second such outage in a few months.

[update: 16:00]

We understand that services are slowly being restored. This may take some time so please bear with us — we’re in the hands of the external supplier here. Further updates will follow as we get them through from the supplier network.

[update 14:52]

The supplier advises that the incident currently affecting their network is due to a denial-of-service attack. They are now working with transit vendors to stop the attack and restore the network. Further updates will follow.

[update at 13:49]

Please be advised that the supplier is continuing to investigate the cause of this issue. We apologise to all our customers affected by it. Further updates will follow.

We’re aware of an issue affecting our core supplier which has dropped all of our VOIP circuits, including our own PBX. There is no estimated time to fix yet, but we’ll update this ticket as we know more.

UPDATE: Sending email via Merula servers

[Update 15th March] This work has been completed
[Update 14th March] As of tonight, the main outgoing server will be switched to the new site; anyone using an SSL connection needs to ensure they have changed their outbound server name to
[Update] The new server is up and running. If you use SSL or wish to test sending email, please try changing to this server name and ensure you can send email OK. Once we are happy, we will update the DNS so that also points to the new server.
We are in the process of bringing into service additional upgraded hardware for our outbound email servers ( and At the same time, we have taken the opportunity to upgrade the security certificate.
A side effect of the certificate change is that anyone using SSL in their email client will need to change the outbound server name from to
The new servers will be made live over this weekend to allow people time to make the necessary changes.
As always, if you have any concerns, please raise a ticket to
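If you would like to confirm the new server is reachable before switching your mail client, a quick check along these lines may help. This is a hedged sketch using only Python's standard library; `smtp.example.net` is a placeholder hostname, standing in for the new server name given in your client settings:

```python
import smtplib
import ssl


def check_smtps(host, port=465):
    """Return True if `host` answers on the SMTPS port with a certificate
    whose chain and hostname both verify; False otherwise."""
    context = ssl.create_default_context()  # verifies the CA chain and hostname
    try:
        with smtplib.SMTP_SSL(host, port, timeout=10, context=context) as conn:
            conn.noop()  # simple SMTP round-trip to confirm the server responds
        return True
    except (ssl.SSLError, OSError):
        # Covers certificate mismatches, refused connections and DNS failures.
        return False


# Example with the placeholder hostname (replace with the real server name):
# check_smtps("smtp.example.net")
```

If this returns False after the changeover, the likely causes are an old server name still configured (hostname mismatch against the new certificate) or a typo in the new name.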

INFORMATION: Telecity at HEX — work on facility UPS

This is a notification of a possible low-level “at risk” issue, but we believe the risk is minimal as we also have our own UPS units covering the services & systems located at HEX.

This is the most recent update from HEX:

“UPS works planned have been unsuccessful. Temporary UPS units are available should we need it. Further replacement parts are being sourced and a more detailed action plan is being drafted in order to resolve the issue as soon as possible.

Resiliency level remains at N and there is no anticipated interruption to your services.

We will provide a further update once resolved or if there is any change to the current situation”

COMPLETE: Supplier interconnect work planned 9th Feb 2016

No issues.

Planned Duration:

Start: 09/02/2016 00:01 GMT
Finish: 09/02/2016 05:00 GMT


One of our interconnect suppliers will be upgrading their capacity with BTW for DSL services in Telehouse North. This will involve migrating traffic on a pair of interconnects to new, higher capacity ones.

Customer Impact:

We do not anticipate any impact to Merula customer traffic; however, all BT DSL services delivered from our Telehouse North node should be considered AT RISK during this maintenance window.
If you have any queries about this work please raise a support ticket with our helpdesk.

We will issue an ALL CLEAR once this work has been completed.

[CLEARED] Speed & traffic issues across the Merula broadband network

Yesterday was Patch Tuesday from Microsoft, and last night Apple released large updates to both iOS and OS X. As a consequence, all supplier back-haul networks (BT, TalkTalk, Vodafone etc.) are seeing a dramatic spike in traffic across their links, which is impacting our services to customers on ADSL/FTTC-type circuits.

We apologise for this, but the traffic should slowly die away over the course of the day, so download speeds will start to improve.

Please bear with us. If you’re still seeing speed problems tonight/tomorrow, then raise a support ticket in the usual manner.

At risk: Electrical supply Avro Court 26th Nov [Completed]

COMPLETED: All work was completed during the evening of 26th November and the data centre now has full access to mains power, generator power and the UPS. There was no outage to hosted servers during the work.

UPDATE: we will have engineers on site on Thursday 26th to replace parts in our automatic transfer panel. Most of the work will not be service-affecting; however, there will be a short period where the data centre will be supplied by UPS alone. While we don’t anticipate any disruption, the power to our racks should be considered at risk during this period. All “at risk” work will be carried out outside core business hours, and we will have staff and external electricians on site as needed to monitor and resolve any issues found.

UPDATE: the engineers are about to isolate the mains supply and generator and move us over to UPS power. The UPS has been tested to support the data centre for at least 30 minutes, but for this period we are at risk.

We are aware of an issue with our changeover panel here, and an engineer is en route to work on this.

If the panel needs to go off-line, resilience will be reduced for a short period, but the generator and UPS are still available; both have just been fully tested as part of our normal weekly maintenance regime and will cut in automatically. As a reminder, the generator has fuel for at least 24 hours of continuous running.

Once the panel issue has been resolved, we’ll update this post.