Jul 13, 2017 | Uncategorized
We have seen some routing issues on our Telehouse North core today affecting circuits terminated in this location or traffic passing through thn-gw1 or thn-gw2.
The primary issue has now been resolved and all traffic should be routing correctly. We anticipate some follow-up work and will communicate the details as soon as it is planned.
We apologise for the issues that some customers may have seen.
[update] As a side effect of the router reload, it seems one of the core routers loaded an old version of its config, which caused an issue for a number of directly connected customers. The config has now been restored to the latest version and we believe routing is now stable and correct.
Jun 23, 2017 | Information, Outages
We are aware that our post.merula.net server was slow or unresponsive between approximately 5pm and 8:30pm this evening.
This was due to a customer sending a significant volume of spam through the server. We have blocked the offending customer and cleared the mail queue on this mail server. Service is now restored.
Note that there is a small chance that some outgoing emails were lost while the queue was being cleared, although we believe all legitimate emails were sent OK.
If you sent an email via post.merula.net during this time and have not received a reply, you may wish to re-send it to ensure it has arrived.
We apologise for any issues caused here.
Jun 13, 2017 | Information, Outages, Unplanned downtime, Update
RFO 09:40am:
To fix transient IPv6 and other intermittent routing issues we had seen recently, we were obliged to upgrade the software on one of our core routers. This router holds live and backup routes that allow a smooth failover if a single router in London fails. However, in an undocumented change from the software supplier, the latest software sets the routes as live on both the primary and backup routers. This resulted in a routing loop for some IP addresses with static routes originating from the affected router, which therefore did not fail over correctly as they previously would have.
Again, please accept our apologies for this short outage. It shouldn’t have happened.
The cause is understood and the problem has now been fixed on the one affected router. We have also checked all other routers in the network and are confident they are now running properly.
UPDATE 09:27am:
We have identified the root cause, a core switch at one of our London locations, and are working on bringing this back into service. No ETA yet, but we expect this to be resolved shortly. Apologies for the downtime some of you are experiencing.
09:09am We are aware of reports of leased lines down and are investigating. More updates here as we know the cause & ETA to fix.
Jun 7, 2017 | Information, Planned Work
The works detailed below should have no effect on services delivered via the Merula network, apart from a possible brief period of routing instability as routes re-converge after the link comes back online.
Service Affected: Level3 Transit Bearer Manchester
Start of maintenance: 15/06/2017 22:00 BST
End of maintenance: 16/06/2017 03:00 BST
Duration of work: Up to 45 minutes within the outage window
Impact of work: Outage to listed service
Classification of works: Standard
Description of works: Third-party maintenance on underlying fibre connections
Jun 6, 2017 | Information
Merula will be observing a minute's silence at 11am today in memory of those who died or were injured in the London attack on Saturday evening.
Please be aware that if you are in contact with one of our agents at this time, you may be asked to wait until the minute's silence has been observed.
May 25, 2017 | Uncategorized
Merula will be observing a minute's silence at 11am on 25/05/2017 in memory of those who died or were injured in the Manchester bombing on Monday.
Please be aware that if you are in contact with one of our agents at this time, you may be asked to wait until the minute's silence has been observed.
May 18, 2017 | Information, Outages, Unplanned downtime
We are aware that a number of leased lines have dropped because of a hardware fault on a core switch in London, and we apologise for the disruption.
We will have this restarted within the hour and will be replacing this switch with a new one overnight. More status updates will be posted here.
[update] The initial issue was resolved before 6pm and the replacement hardware was installed in London this evening. This is being monitored, but we hope it will resolve the issue once and for all.
May 12, 2017 | Information, Planned Work
One of our carriers (TalkTalk) is performing emergency planned works on their interconnect in Telehouse North. We have resilient interconnects in other locations; however, during the work you may see your connections drop and re-connect over our alternative interconnect.
The work is scheduled for between 16/05/2017 20:00 and 17/05/2017 06:00
If there are any issues, please contact support via the normal routes.
May 6, 2017 | Uncategorized
We are seeing a low level of packet loss on some fibre lines in the Cambridge (01223) area. The loss appears mainly in the evening. We have raised this with the wholesale carrier, who have advised us that they are aware and are investigating the root cause.
We will update this case as we hear more; so far we have identified 5 lines affected by this issue.
[update] The carrier has identified an issue on one of their routers and has moved the traffic for the affected circuits onto an alternative route. The packet loss has now disappeared.
May 3, 2017 | Update
This was resolved at approximately 4pm, after the faulty switch was swapped on the supplier's network.
[Update at 14:46]
The supplier is advising us that most lines are now coming back online; this may take a few more minutes as the RADIUS server catches up. Anyone still affected after this time should power off their router for at least 20 minutes to clear any stale session. Please email support@merula.net if this fails to bring you back live.
We apologise for the lengthy downtime and are looking at further remedial work with the supplier to ensure that such a failure doesn't affect us in future.
[Update at 14:33]
Apologies that we have nothing concrete yet on time to fix; we are escalating this to senior managers within the supplier to get it resolved.
[Update at 13:33]
Senior engineers are currently on site working on the faulty hardware.
Further updates will be posted once the work has been completed.
[Update at 12:35]
Supplier update: we are seeing a partial recovery on the equipment.
We are aware that some circuits are still down; the supplier's engineers are looking to replace some of the hardware in the switch stack.
Further updates will be posted when available.
[Update at 12:10]
The supplier has a new switch en route to the site to be swapped in; they expect this to be complete by 1pm. We'll update as this progresses.
We are aware of a problem affecting one of the interconnect switches on a transit supplier's network, which meant that a number of lines dropped earlier this morning and are still down. They and we are working on getting this switch bypassed and replaced. We currently have no time-frame for a fix, but believe service will not be affected for too much longer.