Scaleway
Identified - A phishing attempt is currently underway targeting Dedibox; our Trust and Safety team is working to stop it.

We advise you not to click on the links and to delete the email in question.

May 21, 2024 - 12:28 CEST
Monitoring - We have identified and patched the backend storage issue. The corrupted data is currently being cleaned; however, this will take several hours for Scaleway product data.
The team is actively monitoring the progress of the fix.

May 16, 2024 - 16:54 CEST
Identified - Cockpit has been encountering an outage since yesterday at 7:15 PM. Our storage backend has been identified as the source of the issue. Both Storage and Cockpit teams are collaborating to resolve it. During this period, customers may experience unavailable data in all regions, along with potential timeouts.
May 15, 2024 - 17:50 CEST
Update - We are continuing to investigate this issue.
May 15, 2024 - 09:43 CEST
Update - We are continuing to investigate this issue.
May 15, 2024 - 09:42 CEST
Update - We are continuing to investigate this issue.
May 15, 2024 - 09:41 CEST
Investigating - Scaleway product metrics are unavailable for all users on FR-PAR.

Users may encounter timeouts on their dashboards.

Our engineers are investigating the situation.

May 15, 2024 - 09:29 CEST
Identified - No public access in ams1 hall 6, rack 60, blocks A to D
1 Gbps of traffic affected
48 servers impacted

May 15, 2024 - 09:50 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Apr 22, 2024 - 20:16 CEST
Update - After further investigation, our team identified that newly attached resources were also impacted.

Following a resolution process, VPC connectivity between fr-par-1 and fr-par-2 was interrupted from 15:15 to 15:39 CEST.

Cockpit functionality is also affected.

Apr 22, 2024 - 16:07 CEST
Investigating - Our team is currently investigating the issue and has identified a potential root cause.
Apr 22, 2024 - 10:52 CEST
Monitoring - During planned maintenance around 10:00 AM, some slowness may occur when using the Scaleway Elements or Dedibox console/API.
We are monitoring the issue.

May 14, 2024 - 10:28 CEST
Investigating - Our service is currently experiencing disruption due to blacklisting by Microsoft.
We are actively working with Microsoft to resolve this issue as soon as possible.

Apr 19, 2024 - 00:16 CEST
Identified - We have detected an issue affecting servers SD 94863 to SD 94880, which have been unreachable since 17.04.2024 15:12 UTC.

The issue has been forwarded to our team for resolution.

Apr 18, 2024 - 01:34 CEST
Update - You can still manage datacenter interventions from your Dedibox console, under Housing.
Dec 19, 2023 - 17:47 CET
Update - We are continuing to investigate this issue.
Dec 19, 2023 - 16:48 CET
Investigating - Ticketing directed to Opcore datacenters is currently unavailable to our dedirack clients.
Our team is currently investigating.

Dec 19, 2023 - 16:48 CET
Investigating - We have noticed that connection problems with the dedibackup service can occur.
We will get back to you as soon as we have more information on the situation.

Apr 06, 2023 - 12:23 CEST
Component status (uptime over the last 90 days):

Elements - AZ: Operational (99.24% uptime)
  fr-par-1: Operational (98.0% uptime)
  fr-par-2: Operational (97.58% uptime)
  fr-par-3: Operational (97.58% uptime)
  nl-ams-1: Operational (100.0% uptime)
  pl-waw-1: Operational (100.0% uptime)
  nl-ams-2: Operational (100.0% uptime)
  pl-waw-2: Operational (100.0% uptime)
  nl-ams-3: Operational (100.0% uptime)
  pl-waw-3: Operational (100.0% uptime)

Elements - Products: Major Outage (97.9% uptime)
  Instances: Operational (99.99% uptime)
  BMaaS: Operational (100.0% uptime)
  Object Storage: Operational (99.96% uptime)
  C14 Cold Storage: Operational (100.0% uptime)
  Kapsule: Operational (97.32% uptime)
  DBaaS: Operational (91.75% uptime)
  LBaaS: Operational (99.99% uptime)
  Container Registry: Operational (98.09% uptime)
  Domains: Operational (100.0% uptime)
  Elements Console: Operational (94.89% uptime)
  IoT Hub: Operational (99.99% uptime)
  Account API: Operational (99.99% uptime)
  Billing API: Operational (94.9% uptime)
  Functions and Containers: Operational (96.76% uptime)
  Block Storage: Operational (100.0% uptime)
  Elastic Metal: Operational (100.0% uptime)
  Apple Silicon M1: Operational (100.0% uptime)
  Private Network: Operational (99.59% uptime)
  Hosting: Operational (100.0% uptime)
  Observability: Major Outage (87.22% uptime)
  Transactional Email: Operational (100.0% uptime)
  Jobs: Partial Outage (74.16% uptime)
  Network: Operational (100.0% uptime)

Dedibox - Datacenters: Operational (99.21% uptime)
  DC2: Operational (99.95% uptime)
  DC3: Operational (97.23% uptime)
  DC5: Operational (99.96% uptime)
  AMS: Operational (99.7% uptime)

Dedibox - Products: Partial Outage (97.65% uptime)
  Dedibox: Partial Outage (92.41% uptime)
  Hosting: Operational (99.95% uptime)
  SAN: Operational (100.0% uptime)
  Dedirack: Operational (100.0% uptime)
  Dedibackup: Operational (100.0% uptime)
  Dedibox Console: Operational (100.0% uptime)
  Domains: Partial Outage (88.83% uptime)
  RPN: Operational (100.0% uptime)

Miscellaneous: Operational (100.0% uptime)
  Excellence: Operational (100.0% uptime)
Scheduled Maintenance
As a reminder of the email sent on April 3rd, 2024 to Public Gateways users, we are changing the Public IP behavior from a NAT IP to a Routed IP.

In order to be compatible with IP Mobility, you should upgrade your Public Gateway.

There are two ways to do this:

1. Manually, at your convenience, through the Console. The consequence will be a short downtime of your Public Gateway for a maximum of 1 minute.

2. Or automatically, during the forced upgrade between May 27th, 2024, and May 31st, 2024. This will involve deleting your current Public Gateway and spawning a new one with the same configuration, compatible with IP Mobility.

Questions? Need help? Do not hesitate to open a ticket or ask on the #public-gateway Slack channel.
The Network Team.

P.S: If you have already upgraded your Public Gateway, you are all set. Please disregard the above message.

Posted on May 14, 2024 - 10:40 CEST
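For users who prefer the manual upgrade (option 1 in the notice above), here is a minimal Python sketch of what that flow could look like against the Scaleway REST API. The endpoint paths, the zone, the "gateways" response field, and the environment variable name are assumptions for illustration; verify them against the Scaleway developer documentation before use.

```python
# Sketch: list Public Gateways in a zone and trigger the manual upgrade
# described in the notice above. API paths and response field names are
# assumptions; check the Scaleway developer docs before running this.
import os
import requests

API = "https://api.scaleway.com"
ZONE = "fr-par-1"                                        # assumed zone
HEADERS = {"X-Auth-Token": os.environ["SCW_SECRET_KEY"]}  # your API secret key

def list_gateways():
    resp = requests.get(f"{API}/vpc-gw/v1/zones/{ZONE}/gateways", headers=HEADERS)
    resp.raise_for_status()
    return resp.json().get("gateways", [])

def upgrade_gateway(gateway_id: str):
    # Triggers the manual upgrade (short downtime, about 1 minute per the notice).
    resp = requests.post(
        f"{API}/vpc-gw/v1/zones/{ZONE}/gateways/{gateway_id}/upgrade",
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for gw in list_gateways():
        print(gw.get("id"), gw.get("name"), gw.get("status"))
        # Uncomment to upgrade a specific gateway after reviewing the list:
        # upgrade_gateway(gw["id"])
```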
[DEDIBOX] - Maintenance Console May 28, 2024 07:00-09:00 CEST
Dear Dedibox users,

We would like to inform you that console.online.net and api.online.net will be undergoing maintenance on Tuesday, May 28th, from 7:00 AM to 9:00 AM (Paris Time).
The purpose of this maintenance is to enhance our internal support tool and improve tracking of your issues and support requests. We apologize for any inconvenience this may cause and appreciate your patience and understanding.
Thank you for your cooperation.
Best regards, Dedibox Team

Posted on May 20, 2024 - 12:07 CEST
Past Incidents
May 22, 2024
Resolved - This incident has been resolved.
May 22, 11:38 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
May 22, 10:29 CEST
Investigating - We are currently investigating this issue.
May 22, 09:09 CEST
Resolved - This incident has been resolved.
May 22, 11:23 CEST
Monitoring - The issue has been identified and solved.
May 22, 10:29 CEST
Investigating - New instances can't be deployed and remain stuck in the "initializing" status.
May 22, 10:00 CEST
May 21, 2024
Resolved - This incident has been resolved.
May 21, 17:15 CEST
Investigating - While updating our infrastructure, some instances might have been stuck in initializing for a few minutes and failovers might have occurred, around 8:27 UTC, 8:33 UTC, and 9:21 UTC.
May 15, 12:01 CEST
Resolved - This incident has been resolved.
May 21, 14:52 CEST
Monitoring - A fix has been implemented and deployed, everything should be back to normal. We are now monitoring the results.
May 20, 18:04 CEST
Investigating - We are currently experiencing some network issues on our fr-par infrastructure. Users with functions/containers in fr-par region might experience the following:
- 5xx errors (or timeouts) when calling their function/container
- high latency when calling their function/container
- sporadic network issues (e.g. DNS not resolving) for processes running in their function/container
We are investigating. Sorry about the inconvenience.

May 15, 16:19 CEST
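As a client-side mitigation for the 5xx errors and timeouts described in the update above, a generic retry-with-backoff wrapper can help ride out transient failures. The sketch below is not Scaleway-specific and the endpoint URL is a placeholder.

```python
# Generic mitigation for transient 5xx errors, timeouts, and DNS hiccups when
# calling a Serverless Function/Container endpoint. The URL is a placeholder.
import time
import requests

ENDPOINT = "https://example-function.functions.fnc.fr-par.scw.cloud"  # placeholder

def call_with_retries(url: str, attempts: int = 5, timeout: float = 10.0):
    delay = 1.0
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.get(url, timeout=timeout)
            if resp.status_code < 500:
                return resp  # success, or a client error we should not retry
        except requests.RequestException:
            pass  # timeout, DNS failure, connection reset, ...
        if attempt < attempts:
            time.sleep(delay)
            delay *= 2  # exponential backoff
    raise RuntimeError(f"{url} still failing after {attempts} attempts")

if __name__ == "__main__":
    print(call_with_retries(ENDPOINT).status_code)
```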
Completed - The scheduled maintenance has been completed.
May 21, 12:09 CEST
Update - Scheduled maintenance is still in progress. We will provide updates as necessary.
May 21, 10:47 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 21, 10:00 CEST
Scheduled - Description :
Due to an unavoidable upgrade to one part of our Paris infrastructure, some Functions and Containers users might experience sporadic downtime while we run the maintenance. Potential impacts are listed below. Sorry for any inconvenience.

Impacts :
Affects around 2.5% of users with Serverless Functions/Containers in the fr-par region.

During this maintenance, affected users might encounter the following:
- unexpected restarts of functions/containers
- function/container instances unable to scale up
- function/container endpoints (default endpoint or custom domains) taking an abnormally long time to answer HTTP requests, or returning 5xx or 404 errors
- crons not scheduled or delayed
- function builds delayed

Start :
May 21st, 2024 : 0800 UTC (1000 CEST)

Duration :
2 hours

May 16, 16:31 CEST
May 20, 2024
Resolved - This incident has been resolved.
May 20, 21:33 CEST
Update - We are continuing to monitor for any further issues.
May 8, 10:30 CEST
Monitoring - Between 2024/05/07 18:00 CEST and 2024/05/08 08:00 CEST, we encountered issues preventing some users from deploying or redeploying their serverless function/container.

Functions/containers deployed during that time might have been either unavailable, or stuck in "error" status despite the function/container working properly: in that case, redeploying the function/container should solve the issue.

Sorry about the inconvenience.

May 8, 10:28 CEST
Resolved - The team in charge has fixed the issue. Our support team remains at your disposal should you still experience any issues.
May 20, 09:39 CEST
Investigating - Our team is currently investigating an incident on a switch in datacenter DC2. Impact is on customers with servers in rack A3, room 103 in DC2.
May 20, 09:34 CEST
May 19, 2024

No incidents reported.

May 18, 2024

No incidents reported.

May 17, 2024
Completed - The scheduled maintenance has been completed.
May 17, 17:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 15, 09:00 CEST
Scheduled - Clusters using 1.24 version will automatically be upgraded to 1.25.
Find more details in our version support policy: https://www.scaleway.com/en/docs/containers/kubernetes/reference-content/version-support-policy/

Apr 19, 14:44 CEST
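To check whether a cluster is still running 1.24 before the automatic upgrade, the standard Kubernetes Python client can report the control-plane version from your kubeconfig. This is a generic sketch, not a Kapsule-specific API.

```python
# Quick check of the control-plane version reported by your cluster, e.g. to
# confirm whether it is still on 1.24 before the automatic upgrade to 1.25.
# Uses the standard Kubernetes Python client and your local kubeconfig.
from kubernetes import client, config

def control_plane_version() -> str:
    config.load_kube_config()           # reads ~/.kube/config (or $KUBECONFIG)
    info = client.VersionApi().get_code()
    return info.git_version             # e.g. "v1.24.17"

if __name__ == "__main__":
    print("Control plane version:", control_plane_version())
```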
Resolved - The transceiver has been replaced.

No more service interruption expected

May 17, 10:24 CEST
Update - Servers in Blocks F to K are impacted.

A transceiver is faulty and will be replaced; servers will be unreachable during the replacement.

May 17, 09:16 CEST
Monitoring - Our engineers are working on the root cause.
May 17, 08:28 CEST
Investigating - We suspect a switch is down in DC5 (Room: 1 1, Bay: D28).

Our engineers are investigating the situation.

May 17, 07:53 CEST
May 16, 2024
Resolved - This incident has been resolved.
May 16, 16:22 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
May 16, 15:12 CEST
Investigating - We are currently investigating this issue.
May 16, 14:57 CEST
Completed - The scheduled maintenance has been completed.
May 16, 14:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 16, 10:00 CEST
Scheduled - At 10:00 AM CET on Thursday, 16 May, we are upgrading the services that support IoT Hub. The API, Console, and auto-provisioning feature will be unavailable for 4 hours during this window. MQTT will keep working for most of this time, but there will be a 30-minute interruption. During this time, devices will be disconnected and any last-will and retained messages will be lost. Devices will be able to reconnect automatically once MQTT is available again.
May 14, 17:29 CEST
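Devices that handle disconnects gracefully will ride out the 30-minute MQTT interruption described above. Below is a minimal sketch using the paho-mqtt client's built-in reconnect logic; the hub endpoint, port, topic, and device ID are placeholders, and TLS/certificate setup is omitted for brevity.

```python
# Sketch of an MQTT device that reconnects automatically after the maintenance
# window. Callback signatures shown are for paho-mqtt 1.x; the endpoint,
# port, topic, and device ID are placeholders (TLS/cert setup omitted).
import paho.mqtt.client as mqtt

HUB_ENDPOINT = "iot.fr-par.scw.cloud"   # placeholder, use your Hub endpoint
DEVICE_ID = "my-device-id"              # placeholder

def on_connect(client, userdata, flags, rc):
    print("connected, rc =", rc)
    # Re-subscribe on every (re)connect: retained and last-will messages may
    # have been lost during the interruption, per the maintenance notice.
    client.subscribe("my/topic")

def on_disconnect(client, userdata, rc):
    print("disconnected, rc =", rc, "- the client will retry automatically")

client = mqtt.Client(client_id=DEVICE_ID)
client.on_connect = on_connect
client.on_disconnect = on_disconnect
client.reconnect_delay_set(min_delay=1, max_delay=60)  # back off up to 60 s
client.connect(HUB_ENDPOINT, 1883)
client.loop_forever(retry_first_connection=True)
```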
Resolved - The incident is still in progress; follow-up continues on https://status.scaleway.com/incidents/m2jj2crt1sfn, which is now the reference status page.

Our engineers are still working to resolve the situation.

May 16, 09:54 CEST
Update - Slowdowns have been occurring again since 17:00.

Our team is currently investigating.

May 7, 17:42 CEST
Update - Some customers may find it impossible to display their data. Our teams are currently investigating the situation.
May 3, 20:16 CEST
Update - A bad index was found and fixed; services are back to normal. We will continue to monitor.
May 3, 16:09 CEST
Update - We are continuing to investigate this issue.
May 3, 16:08 CEST
Update - Latency is back to normal for most queries, but query_range requests still show high latency. We are investigating.
May 2, 17:52 CEST
Investigating - The Cockpit Logs cluster in fr-par is degraded and experiencing high latencies.

Queries to Scaleway-generated logs may time out.

Our engineers are working on it.

May 2, 16:33 CEST
May 15, 2024

Unresolved incidents: [NETWORK] - No public access in ams1 hall 6 rack 60 bloc A to D, [VPC] Newly created Private Networks on FR-PAR-2 do not communicate with the rest of FR-PAR.

May 14, 2024
Resolved - This incident has been resolved.
May 14, 16:25 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
May 14, 11:57 CEST
Investigating - Some users with crons on their functions/containers might see their crons delayed or not executed at all.

We are investigating. Sorry for any inconvenience.

May 14, 10:54 CEST
Resolved - This incident has been resolved.
May 14, 11:23 CEST
Update - Ingestion of product metrics on nl-ams was unavailable from 00:00 to 10:00.
Apr 23, 10:04 CEST
Update - Ingestion is back to normal; we are still investigating.
Apr 22, 13:55 CEST
Investigating - Scaleway metrics from nl-ams are not ingested, and will not appear in Cockpit.
Apr 22, 09:33 CEST
Resolved - This incident has been resolved.
May 14, 11:22 CEST
Monitoring - The ingestion of logs and metrics has returned to normal, and we are currently monitoring the situation.
Apr 25, 12:03 CEST
Identified - S3 logs have not been sent to Cockpit since 15:20 Paris time.
Apr 17, 22:01 CEST
Resolved - This incident has been resolved.
May 14, 10:59 CEST
Monitoring - The service is operational. We are continuing to monitor the issue.
May 6, 09:46 CEST
Identified - The issue is back since 6:00 AM (CEST), we are working on a new fix.
May 6, 09:16 CEST
Monitoring - The service is operational. We are continuing to monitor the issue.
May 4, 15:54 CEST
Update - We are continuing to work on a fix for this issue.
May 4, 15:44 CEST
Identified - SQS and SNS have been down since 1:16 PM (CEST) on FR-PAR.
NATS should still work, but management through the Scaleway API does not.
We're investigating.

May 4, 15:44 CEST
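For users checking whether the SQS-compatible API has recovered, a quick availability probe can be done with boto3. The endpoint URL below is a placeholder; use the SQS endpoint and credentials shown in your Messaging and Queuing credentials panel.

```python
# Quick availability probe against the SQS-compatible API during the outage.
# The endpoint URL is a placeholder: substitute the SQS endpoint from your
# Messaging and Queuing credentials panel, with its access/secret keys.
import os
import boto3

sqs = boto3.client(
    "sqs",
    endpoint_url="https://sqs.mnq.fr-par.scaleway.com",  # placeholder endpoint
    region_name="fr-par",
    aws_access_key_id=os.environ["MNQ_ACCESS_KEY"],
    aws_secret_access_key=os.environ["MNQ_SECRET_KEY"],
)

try:
    queues = sqs.list_queues().get("QueueUrls", [])
    print("SQS reachable, queues:", queues)
except Exception as exc:  # botocore ClientError, EndpointConnectionError, ...
    print("SQS unreachable:", exc)
```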
May 13, 2024

No incidents reported.

May 12, 2024

No incidents reported.

May 11, 2024

No incidents reported.

May 10, 2024
Completed - The scheduled maintenance has been completed.
May 10, 17:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 29, 09:00 CEST
Update - Scheduled for Apr 29, 2024 - May 10, 2024
Feb 28, 11:04 CET
Scheduled - Kubernetes Kapsule clusters in the FR-PAR region with public-only endpoints will be migrated to Private Networks.

Network downtime: this migration will result in a temporary network loss of 1 to 10 minutes.

With the new default isolation configuration, worker nodes keep their public IPs to access the Internet. After migrating, existing security group configurations won't be overridden, and the wildcard DNS records (RR) will still point to public IPs.

Find our dedicated documentation on Kapsule with Private Networks https://www.scaleway.com/en/docs/containers/kubernetes/reference-content/secure-cluster-with-private-network/#how-can-i-migrate-my-existing-clusters-to-regional-private-networks

Dec 1, 10:21 CET
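After the migration, one way to confirm that worker nodes kept their public IPs is to list node addresses. Below is a small, generic sketch using the standard Kubernetes Python client against your kubeconfig; it is not a Kapsule-specific API.

```python
# List node addresses after the Private Network migration, e.g. to confirm
# that worker nodes still expose a public (ExternalIP) address.
# Uses the standard Kubernetes Python client and your local kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config (or $KUBECONFIG)
for node in client.CoreV1Api().list_node().items:
    addrs = {a.type: a.address for a in node.status.addresses}
    print(node.metadata.name, addrs)
```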
Resolved - This incident has been resolved.
May 10, 16:55 CEST
Investigating - All actions on instances may take longer than usual on FR-PAR-1. We are currently investigating this issue.
May 10, 16:20 CEST
Resolved - This incident has been resolved.
May 10, 16:10 CEST
Monitoring - Our engineers have identified and fixed the situation, and we are monitoring it.
May 10, 13:25 CEST
Investigating - Servers are randomly losing connectivity; this affects both the public network and the RPN.

Our engineers are currently working to resolve this situation.

May 10, 12:43 CEST
Resolved - This incident has been resolved.
May 10, 15:37 CEST
Monitoring - The functionalities are back in production. We're still monitoring the situation.
Apr 25, 11:05 CEST
Investigating - An issue with our billing system makes the current consumption, the invoice listing, and the public APIs unavailable. Our team is already working to resolve this incident as soon as possible.
Apr 25, 08:25 CEST
Resolved - This incident has been resolved.
May 10, 15:37 CEST
Investigating - On Monday, May 6th, from 2:00 PM to 3:00 PM (Paris time), it will not be possible to add, modify, or delete a SEPA direct debit mandate
May 2, 13:11 CEST
May 9, 2024

No incidents reported.

May 8, 2024
Resolved - This incident has been resolved.
May 8, 13:51 CEST
Investigating - Some mailboxes are inaccessible on several different PFs.

"Filer" seems to be affected.

Our engineers are investigating the situation

May 8, 10:23 CEST