
Feed available - Subscribe to our feed to stay up to date on upcoming maintenance and incidents.

Kony Cloud status
Current status and incident report

Tokyo performance degradation

Incident window: December 17, 2019 10:15 to 11:30
Impacted Cloud services:
  • Cloud services in Tokyo

    • Issue affects all customers using multi-tenant Identity services in Tokyo


Impact Level: high

The performance and availability of Cloud services in other regions are not impacted.

[2019-12-17 09:50 UTC] We are observing excessive activity on Identity services in the Tokyo region.

[2019-12-17 10:20 UTC] Database capacity has been exceeded, and we are working to determine the cause and restore services.

[2019-12-17 11:28 UTC] Resolved. The space issue has been resolved, and Identity services in the Tokyo region are responding normally.

Cloud Management Console and Workspace Services Hotfixes

Maintenance window: December 9, 2019 00:01 to 04:00
The maintenance window start and end times are local to the region in which your Clouds are hosted. If you are unsure where your Clouds are hosted, you can hover over a Cloud Name on the Manage Clouds page of the Cloud Management Console to display the region.
Impacted Cloud services:
  • Cloud Management Console

    • Fix RDBMS services not loading issue occurring after Fabric upgrade

  • Workspace

    • Fix RDBMS services not loading issue occurring after Fabric upgrade


Impact Level: minor

No downtime is expected for the impacted Cloud services while this maintenance is being performed. The scheduled maintenance is designed to minimize disruptions to service availability and performance for the impacted Cloud services. However, the impacted Cloud services may be unavailable and/or experience degraded performance for a short period during the maintenance window. Note that no changes are being applied to Cloud services outside the list of impacted services above, and no availability or performance disruption is expected for those services.

Workspace Services Hotfix

Maintenance window: November 21, 2019 00:01 to 04:00
The maintenance window start and end times are local to the region in which your Clouds are hosted. If you are unsure where your Clouds are hosted, you can hover over a Cloud Name on the Manage Clouds page of the Cloud Management Console to display the region.
Impacted Cloud services:
  • Workspace

    • Display custom error page and handle 404 not found error


Impact Level: minor

No downtime is expected for the impacted Cloud services while this maintenance is being performed. The scheduled maintenance is designed to minimize disruptions to service availability and performance for the impacted Cloud services. However, the impacted Cloud services may be unavailable and/or experience degraded performance for a short period during the maintenance window. Note that no changes are being applied to Cloud services outside the list of impacted services above, and no availability or performance disruption is expected for those services.

Cloud Management Console Release

Maintenance window: November 18, 2019 05:00 to 09:00 UTC
Impacted Cloud services:
  • Cloud Management Console

    • Add Excel as available export option for chart-based standard reports


Impact Level: minor

No downtime is expected for the impacted Cloud services while this maintenance is being performed. The scheduled maintenance is designed to minimize disruptions to service availability and performance for the impacted Cloud services. However, the impacted Cloud services may be unavailable and/or experience degraded performance for a short period during the maintenance window. Note that no changes are being applied to Cloud services outside the list of impacted services above, and no availability or performance disruption is expected for those services.

Cloud SSL Certificate Updates

Maintenance window: November 16, 2019 00:01 UTC to November 17, 2019 00:01 UTC
Impacted Cloud services:
  • Cloud SSL certificates for Identity services (*.auth.konycloud.com), Engagement services (*.messaging.konycloud.com), App services (*.konycloud.com), and Sync services (*.sync.konycloud.com)

    • ⚠️ Note: If you have not pinned Kony certificates in your application, no application updates will be necessary. Customers that have pinned SSL certificates will need to download the new certificates, rebuild their applications to include both the old and new certificates, and publish the updated binaries to the various app stores. Your applications should be published before Kony updates the certificates on the cloud servers, or those applications will no longer be able to connect. The new certificates can be found on the Kony Cloud Certificate Preview page. They can also be downloaded by executing the following commands (a short extraction-and-pinning sketch follows this list):

      • Identity: openssl s_client -showcerts -connect konycertificatepreview.auth.konycloud.com:443

      • Engagement: openssl s_client -showcerts -connect konycertificatepreview.messaging.konycloud.com:7443

      • App: openssl s_client -showcerts -connect konycertificatepreview.konycloud.com:8443

      • Sync: openssl s_client -showcerts -connect konycertificatepreview.sync.konycloud.com:9443

    • Customers who have pinned the public key instead of the full certificate (an approach we strongly recommend, available since V8 SP4) may not be required to update their applications: the updated certificates will have the same public keys as the existing certificates.

      • If necessary, you can submit your applications for expedited approval (e.g., Apple has an expedited approval process for critical bugs, or in this case, pinned certificates).
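
For reference, here is one way to save a preview certificate to a local file and compute the SPKI SHA-256 hash used for public-key pinning. This is a sketch using standard openssl tooling; the file name is illustrative, and the same pattern applies to the Engagement, App, and Sync preview endpoints with the ports listed above:

  # Extract the leaf certificate presented by the Identity preview endpoint
  openssl s_client -showcerts -connect konycertificatepreview.auth.konycloud.com:443 </dev/null | openssl x509 -outform PEM > identity-new.pem

  # Compute the base64-encoded SHA-256 hash of the certificate's public key (SPKI)
  openssl x509 -in identity-new.pem -pubkey -noout | openssl pkey -pubin -outform DER | openssl dgst -sha256 -binary | base64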


Impact Level: high

Customer applications that have pinned SSL certificates will need to be updated as described above prior to this maintenance window. Customer applications that have not pinned SSL certificates will not be affected and will experience no service disruptions during this maintenance window.

Please refer to our documentation for how to pin the public key of a certificate (which we strongly recommend and which has been available since V8 SP4) or how to pin an SSL certificate in your apps (deprecated).

Integration Server V8 SP4 FP3 HF8 Release

Maintenance window: November 11, 2019 00:01 to 04:00
The maintenance window start and end times are local to the region in which your Clouds are hosted. If you are unsure where your Clouds are hosted, you can hover over a Cloud Name on the Manage Clouds page of the Cloud Management Console to display the region.
Impacted Cloud services:
  • Integration Server

    • Fix inconsistent formatting of x-kony-reportingparams in request headers when an application invokes an Integration service. The header value was URL-encoded, which is not correct; it should be passed as a JSON string, similar to how it is returned via Object services (an illustrative before/after follows this list).

    • ⚠️ Any existing Clouds containing dedicated (i.e., single-tenant) Kony Integration Server runtime environments will NOT be affected during the maintenance window (i.e., they will NOT be automatically upgraded). If you wish to upgrade, please open a support case specifying your Cloud(s), the desired version (V8 SP4 FP3 HF8), and the desired maintenance window (day, time, and timezone) in which we can apply the upgrade. Existing Clouds containing shared (i.e., multi-tenant) Kony Integration Server runtime environments (e.g., AppPlatform Gold and Platinum tiers (excluding Platinum Plus), free Fabric services) WILL be automatically upgraded during the maintenance window. For additional details regarding this hotfix, please refer to our release notes page, which will be updated in the next few days with notes for this latest release.
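
To illustrate the fix, here is a hypothetical before/after for this header. The key names below are illustrative only, not taken from Kony documentation:

  # Before the fix, the header value arrived URL-encoded:
  x-kony-reportingparams: %7B%22aid%22%3A%22MyApp%22%2C%22plat%22%3A%22android%22%7D

  # After the fix, it arrives as a plain JSON string, matching Object services:
  x-kony-reportingparams: {"aid":"MyApp","plat":"android"}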


Impact Level: minor

No downtime is expected for the impacted Cloud services while this maintenance is being performed. The scheduled maintenance is designed to minimize disruptions to service availability and performance for the impacted Cloud services. However, the impacted Cloud services may be unavailable and/or experience degraded performance for a short period during the maintenance window. Note that no changes are being applied to Cloud services outside the list of impacted services above, and no availability or performance disruption is expected for those services.

Workspace services - Oregon is unavailable

Incident window: November 8, 2019 22:16 UTC to 22:48 UTC
Impacted Cloud services:
  • Workspace (Oregon region only)

    • Other regions are not affected


Impact Level: high

Workspace services in Oregon became unavailable starting at 22:16 UTC. The availability and performance of Workspace services in other regions is not affected. We are investigating.

[2019-11-08 22:48 UTC] Resolved. The underlying Oregon Workspace database master node failed and automatically failed over to the replica node, which was promoted to master (a new replica was created afterward). However, while the database recovered through our automation, the Oregon Workspace application did not automatically recover as expected and continued to yield errors until we intervened by restarting it. Since the restart at 22:48 UTC, Oregon Workspace services have been available and performing within expected levels. We will continue to monitor. We will also continue coordinating with our development team to identify why the Workspace application was unable to recover automatically and to work toward a solution so that these types of incidents no longer require attention beyond our automated incident response controls.

Workspace services - Oregon is unavailable

Incident window: November 7, 2019 16:30 UTC to 16:45 UTC
Impacted Cloud services:
  • Workspace (Oregon region only)

    • Other regions are not affected


Impact Level: high

Workspace services in Oregon became unavailable starting at 16:30 UTC. The availability and performance of Workspace services in other regions is not affected. We are investigating.

[2019-11-07 16:59 UTC] Resolved. The underlying Oregon Workspace database master node failed and automatically failed over to the replica node, which was promoted to master (a new replica was created afterward). However, while the database recovered through our automation, the Oregon Workspace application did not automatically recover as expected and continued to yield errors until we intervened by restarting it. Since the restart at 16:45 UTC, Oregon Workspace services have been available and performing within expected levels. We will continue to monitor. We will also continue coordinating with our development team to identify why the Workspace application was unable to recover automatically and to work toward a solution so that these types of incidents no longer require attention beyond our automated incident response controls.

Workspace services - Oregon is unavailable

Incident window: November 4, 2019 17:21 UTC to 18:26 UTC
Impacted Cloud services:
  • Workspace (Oregon region only)

    • Other regions are not affected


Impact Level: high

Workspace services in Oregon became unavailable starting at 17:21 UTC. The availability and performance of Workspace services in other regions is not affected. We are investigating.

Update: The root cause was an infrastructure failure of the underlying database. The database has been recovered, but the systems did not auto-restart as expected. We have opened internal development tickets for this scenario and will have the team correct it, as we expect the Workspace systems to survive this type of issue without intervention.

[2019-11-04 18:41 UTC] Resolved. The underlying Oregon Workspace database master node failed and automatically failed over to the replica node, which was promoted to master (a new replica was created afterward). However, while the database recovered through our automation, the Oregon Workspace application did not automatically recover as expected and continued to yield errors until we intervened by restarting it. Since the restart at 18:26 UTC, Oregon Workspace services have been available and performing within expected levels. We will continue to monitor. We will also coordinate with our development team to identify why the Workspace application was unable to recover automatically and to work toward a solution so that these types of incidents no longer require attention beyond our automated incident response controls.