

UPDATE: leased lines outage earlier today

RFO 09:40am:

To fix the transient IPv6 and other intermittent routing issues we had seen recently, we were obliged to upgrade the software on one of our core routers. This router holds live and backup routes that allow a smooth failover should a single router in London fail. However, in an undocumented change from the software supplier, the latest software sets both the primary and backup routers as live. This resulted in a routing loop for some IP addresses with static routes originating from the affected router, which therefore did not fail over correctly as they previously would have.
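For illustration only (this is not a Merula tool, and the target address below is a documentation placeholder): a routing loop of this kind shows up in a traceroute as the same hop repeating, so a short script can flag it.

```python
# Illustrative sketch: a routing loop shows up as the same hop repeating in a
# traceroute. Assumes a Unix-like host with the `traceroute` command installed;
# the target below is a documentation placeholder, not a Merula address.
import re
import subprocess

def hops(target: str) -> list[str]:
    """Run traceroute and return the responding IPv4 address of each hop, in order."""
    out = subprocess.run(
        ["traceroute", "-n", "-q", "1", target],
        capture_output=True, text=True, timeout=120,
    ).stdout
    return re.findall(r"^\s*\d+\s+(\d+\.\d+\.\d+\.\d+)", out, re.MULTILINE)

def has_loop(path: list[str]) -> bool:
    """Any hop that appears more than once is the classic signature of a loop."""
    return len(path) != len(set(path))

if __name__ == "__main__":
    path = hops("192.0.2.1")  # placeholder; substitute one of the affected static IPs
    print("loop detected:" if has_loop(path) else "path looks clean:", path)
```

Seeing the same address repeat in the output is what a loop like the one described above would have looked like from the outside.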

Again, please accept our apologies for this short outage. It shouldn’t have happened.

We are aware of the cause, and the problem has now been fixed on the one affected router. We have also checked all of the other routers in the network and are confident they are all now running properly.

UPDATE 09:27am:

We have identified the root cause, a core switch at one of our London locations, and are working on bringing it back into service. There is no ETA yet, but we expect this to be resolved shortly. Apologies for the downtime some of you are experiencing.

09:09am: We are aware of reports of leased lines being down and are investigating. More updates will be posted here as we know the cause and the ETA for a fix.

[resolved] OUTAGE: broadband lines are down for some customers

This was resolved at approximately 4pm, after the faulty switch was swapped out on the supplier network.

[Update at 14:46]

The supplier advises that most lines are now coming back on-stream; this may take a few more minutes as the RADIUS servers catch up. Anyone still affected after this should power off their router for at least 20 minutes to clear any stale session. Please email support@merula.net if this fails to bring you back online.

We apologise for the lengthy downtime and are looking at further remedial work with the supplier to ensure that such a failure doesn’t affect us in future.

[Update at 14:33]

Apologies for the lack of anything concrete so far on the time to fix; we are escalating this to senior managers at the supplier to get it resolved.

[Update at 13:33]

Senior engineers are currently on site working on the faulty hardware.

Further updates will be posted once the work has been completed.

[Update at 12:35]

Supplier update: We’re seeing a partial recovery on the equipment.

We’re aware that some circuits are still down; our engineers are looking to replace some of the hardware in the switch stack.

Further updates will be posted when available.

[Update at 12:10]

The supplier has a new switch en route to the site to be swapped in; they expect this to be complete by 1pm. We’ll update as this progresses.

We are aware of a problem affecting one of the interconnect switches on a transit supplier’s network, which caused a number of lines to drop earlier this morning; those lines are still down. The supplier and we are working on getting this switch bypassed and replaced. We currently have no time-frame for a fix, but we do not expect the disruption to last much longer.

[CLOSED] OUTAGE: VOIP service

[update 17:40]

We believe that all connections have now been restored. If you still have VOIP problems, please raise a ticket in the usual manner.

We apologise for this down-time and are actively investigating alternatives and fail-over solutions as this is the second such outage in a few months.

[update: 16:00]

We understand that services are slowly being restored. This may take some time, so please bear with us; we are in the hands of the external supplier here. Further updates will follow as we receive them from the supplier network.

[update 14:52]

The supplier advises that the incident currently affecting their network is due to a denial-of-service attack. They are now working with their transit vendors to stop the attack and restore the network. Further updates will follow.

[update at 13:49]

Please be advised that the supplier is continuing to investigate the cause of this issue. We apologise to all of our customers who are affected. Further updates will follow.

We’re aware of an issue affecting our core supplier which has dropped all of our VOIP circuits, including our own PBX. There is no time to fix yet, but we’ll update this ticket as we know more.

UPDATE: Posting email via Merula servers

[Update 15th March] This work has been completed
[Update 14th March] As of tonight, the main outgoing server will be switched to the new site; anyone using an SSL connection needs to ensure they have changed their outbound server name to post.merula.net.
[Update] The new post.merula.net server is up and running. If you use SSL or wish to test sending email, please try changing to this server name and ensure you can send email OK. Once we are happy, we will update the DNS so that post.merula.co.uk also points to the new server.
We are in the process of bringing into service additional upgraded hardware for our outbound email servers (post.merula.net and post.merula.co.uk). At the same time, we have taken the opportunity to upgrade the security certificate.
A side effect of the certificate change is that anyone using SSL in their email client will need to change their outbound server name from post.merula.co.uk to post.merula.net.
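If you would like to confirm the new server is reachable before the switch-over, a quick connectivity test is sketched below. This is illustrative only, not an official Merula tool; it assumes the SMTPS port 465, so adjust it (for example to 587 with STARTTLS) to match whatever port your email client already uses.

```python
# Minimal check that post.merula.net answers over SSL and presents a valid
# certificate for that name. Port 465 (implicit SSL) is an assumption here;
# change the port/method to match your own client settings.
import smtplib
import ssl

HOST = "post.merula.net"
PORT = 465  # assumed SMTPS port; some clients use 587 with STARTTLS instead

context = ssl.create_default_context()  # verifies the chain and hostname
with smtplib.SMTP_SSL(HOST, PORT, context=context, timeout=30) as smtp:
    code, banner = smtp.ehlo()
    print(f"EHLO response {code}: {banner.decode(errors='replace')}")
    # Reaching this point means the SSL handshake succeeded and the new
    # certificate matches post.merula.net.
    print("SSL connection and certificate check OK")
```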
The new servers will be made live over this weekend to allow people time to make the necessary changes.
As always, if you have any concerns, please raise a ticket to support@merula.net.

INFORMATION: Telecity at HEX — work on facility UPS

This is a notification of a low-level “at risk” issue, but we believe the risk is minimal as we also have our own UPS units covering the services & systems located at HEX.

This is the most recent update from HEX:

“UPS works planned have been unsuccessful. Temporary UPS units are available should we need it. Further replacement parts are being sourced and a more detailed action plan is being drafted in order to resolve the issue as soon as possible.

Resiliency level remains at N and there is no anticipated interruption to your services.

We will provide a further update once resolved or if there is any change to the current situation”

COMPLETE: Supplier interconnect work planned 9th Feb 2016

No issues.

Planned Duration:

Start: 09/02/2016 00:01 GMT
Finish: 09/02/2016 05:00 GMT

Task:

One of our interconnect suppliers will be upgrading their capacity with BTW for DSL services in Telehouse North. This will involve migrating traffic on a pair of interconnects to new, higher capacity ones.

Customer Impact:

We do not anticipate any impact to Merula customer traffic; however, all BT DSL services delivered from our Telehouse North node should be considered AT RISK during this maintenance window.
If you have any queries about this work, please raise a support ticket with our helpdesk.

We will issue an ALL CLEAR once this work has been completed.

[CLEARED] Speed & traffic issues across the Merula broadband network

Yesterday was Patch Tuesday from Microsoft, and last night Apple released large updates to both iOS and OS X. As a consequence, all supplier back-haul networks (BT, TalkTalk, Vodafone etc.) are seeing a dramatic spike in traffic across their links, which is impacting the services we deliver to customers on ADSL/FTTC-type circuits.

We apologise for this, but the traffic should slowly die away over the course of the day, so download speeds will start to improve.

Please bear with us. If you’re still seeing speed problems tonight/tomorrow, then raise a support ticket in the usual manner.

At risk: Electrical supply Avro Court 26th Nov [Completed]

COMPLETED: All work was completed during the evening of 26th November and the data centre now has full access to mains power, generator power and the UPS. There was no outage to hosted servers during the work.

UPDATE: We will have engineers on site on Thursday 26th to replace parts in our automatic transfer panel. Most of the work will not be service affecting; however, there will be a short period where the data centre will be supplied by UPS alone. While we don’t anticipate any disruption, the power to our racks should be considered at risk during this period. All “at risk” work will be carried out outside core business hours and we will have staff and external electricians on site as needed to monitor and resolve any issues found.

UPDATE: The engineers are about to isolate the mains supply and generator and move us over to UPS power. The UPS has been tested to support the data centre for at least 30 minutes, but we are at risk for the period the mains and generator are isolated.

We are aware of an issue with our changeover panel here and an engineer is en-route to work on this.

If the panel needs to go off-line, resilience will be reduced for a short period, but the generator and UPS remain available and will cut in automatically; both have just been fully tested as part of our normal weekly maintenance regime. As a reminder, the generator has fuel for at least 24 hours of continuous running.

Once the panel issue has been resolved, we’ll update this post.

[CLEAR] Telehouse North router maintenance 13th Nov, from 10pm onwards

[UPDATE] All work completed to plan.

We plan on making some further changes to a couple of our core network routers to isolate and correct a service setting that is causing a few BGP issues.

This may mean a few blips in routing tables as the changes are disseminated, and an occasional up-tick in latency as routes converge and stabilise, but we expect no downtime or any significant service issues.
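If you would like to watch for those latency blips during the window yourself, something as simple as timing a TCP connect works. The sketch below is illustrative only; the host, port, threshold and interval are placeholders rather than values we supply.

```python
# Illustrative sketch: time a TCP connect to a host you normally reach via
# Merula every few seconds and flag anything unusually slow. Host, port,
# threshold and interval are placeholders; pick values that suit you.
import socket
import time

HOST, PORT = "example.com", 443   # placeholder target
THRESHOLD_S = 0.25                # flag connects slower than 250 ms
INTERVAL_S = 10

while True:
    start = time.monotonic()
    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            rtt = time.monotonic() - start
        note = "SLOW" if rtt > THRESHOLD_S else "ok"
        print(f"{time.strftime('%H:%M:%S')}  connect {rtt * 1000:.0f} ms  {note}")
    except OSError as exc:
        print(f"{time.strftime('%H:%M:%S')}  connect failed: {exc}")
    time.sleep(INTERVAL_S)
```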

If these plans change, we will update this ticket.

RESOLVED: Outage affecting some Merula services inc. ADSL

15:15 The problem has been traced to one of our core routers, which ‘hung’ without notifying the automatic monitoring system as it should have. This in turn affected routing for some customers connected to our Telehouse data centre. Some other important switches and routers were also impacted: because they could not see that this router was down, they too failed silently.

We have now restored service to the router and are monitoring for any further issues.

This should not have happened, but despite our planning it did, and we can only apologise. We will be reconfiguring the core layout very shortly to make sure a single router failure cannot cause such cascading problems for our customers again.
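For illustration of the kind of check that catches a silent hang: an independent poll that does not rely on the device reporting its own failure. This is only a sketch, not our actual monitoring; the addresses are placeholders, and a real system would also watch BGP sessions, interfaces and so on.

```python
# Illustrative sketch of an independent liveness poll: ping each device from
# outside and alert when one stops answering, rather than waiting for the
# device to report its own failure. Addresses are placeholders; the ping
# flags below are for Linux iputils ping.
import subprocess
import time

ROUTERS = ["192.0.2.10", "192.0.2.11"]   # placeholder management addresses
INTERVAL_S = 30

def alive(addr: str) -> bool:
    """Send one ICMP echo with a 2-second timeout; True if a reply came back."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", addr],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

while True:
    for addr in ROUTERS:
        if not alive(addr):
            print(f"{time.strftime('%H:%M:%S')}  ALERT: {addr} is not answering pings")
    time.sleep(INTERVAL_S)
```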

13:30: We believe that we have resolved these service issues. That said, we are still monitoring and looking for the root cause. Again, our apologies for this loss of service; updates will continue to be posted here once we’ve had a chance to check the logs.

UPDATE: We’re working with our link team, as this is mainly affecting services out of our London data centres. Apologies for this extended down-time; we’re all working on this problem and will update here as we know more.

We’re aware of, and are investigating, the cause of outages affecting a number of services, including some leased lines, ethernet circuits and broadband lines. As soon as we know the root cause and the likely time to fix, we’ll update here.