If you have uninstalled Clean Master for any reason or changed phones, don't worry. This feature will recover everything.
Clean Master will create a private folder on your Google Drive to store the backed-up files. Note that we neither inspect nor have access to this folder.
How to enable backup?
Enable the feature through the wizard or within the app settings
Select Google account to store the backup
Grant Clean Master access to Google Drive 
How to restore a backup?
Make sure the same phone number and Google account that were used to perform the backup have been added to your phone, then do one of the following:
1. 'Clear data' for Clean Master (phone settings > apps > Clean Master > storage), then open Clean Master and go through the wizard again.
2. Uninstall and reinstall Clean Master, then go through the wizard again.
Wait for the restore to finish and restart Clean Master
What do we back up?
Contacts
Messages
App settings
Blocklist 
Call log and search history
Unknown numbers identified by Clean Master
What we don't restore, and good to know!
We do not restore pictures of your contacts
We do not restore ringtones, as they are a system setting
You can only restore during account activation (through the wizard)
Backup is tied to your phone number and Google account 
Backup frequency is by default set to weekly (daily and monthly can be set within the app settings)
Google Play services version 11.6.0 or above is required
Problem with backup & restore?
If you can't select a Google account, please use the following workaround. A proper fix is coming soon.
Open drawer
Edit profile
Sign out from Google by tapping on it
Open Settings
Enable backup again
Choose an account and grant permissions
Backup troubleshooting steps:
If you are unable to create a Google Drive backup, please try the following:
Verify that you have a Google account added to your device.
Verify that you have enough available space in your Google Drive account to create a backup. You can see the amount of space available on your Google Drive at the bottom left of your screen.
Verify that you have Google Play services 11.6.0 or higher installed on your phone.
If you are attempting to back up on a mobile data network, make sure that you have data enabled for both Clean Master and Google Play services (contact your operator if you are unsure).
If you cannot back up with mobile data, try Wi-Fi.
If you are unable to restore a Google Drive backup, please try the following:
Verify that you are attempting to restore data from the same phone number and Google account that the backup was created on.
Verify that you are signed in to the correct Google account while restoring your backup.
Verify that there is enough space on your phone to restore the backup.
Verify that you have Google Play services installed on your device. Note that Google Play services is only available on Android 2.3.4 and higher.
Make sure your battery is fully charged or your phone is plugged in to a power source.
Make sure your phone is connected to a strong and stable network. If restoring over a mobile data network does not work, please try Wi-Fi.
Hybrid- and multi-cloud are quickly becoming the new norm for enterprises, just as service mesh is becoming essential to the cloud native computing environment. From the very beginning, the Pipeline platform has supported multiple cloud providers and wiring them together at multiple levels (cluster, deployments and services) was always one of the primary goals.
We supported setting up multi-cluster service meshes from the first release of our open source Istio operator. That release was based on Istio 1.0, which imposed some network constraints on single-mesh multi-cluster setups: all pod CIDRs had to be unique and routable to each other in every cluster, and the API servers also had to be routable to one another. You can read about it in our previous blog post.
Since then, Istio 1.1 has been released, and we are proud to announce that the latest version of our Istio operator supports hybrid- and multi-cloud single mesh without a flat network or VPN.
Two new features which were introduced in Istio v1.1 come in particularly handy for decreasing our reliance on flat networks or VPNs between clusters: Split Horizon EDS and SNI-based routing.
A single mesh multi-cluster is formed by enabling any number of Kubernetes control planes running a remote Istio configuration to connect to a single Istio control plane. Once one or more Kubernetes clusters is connected to the Istio control plane in that way, Envoy communicates with the Istio control plane in order to form a mesh network across those clusters.
In this configuration, a request from a sidecar in one cluster to a service in the same cluster is forwarded to the local service IP (as per usual). If the destination workload is running in a different cluster, the remote cluster Gateway IP is used to connect to the service instead.
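Under the hood, this gateway-based routing relies on Istio 1.1's mesh network configuration, which maps each network to the ingress gateway address that remote sidecars should use. Purely as an illustration (the network name, cluster registry name and gateway address below are hypothetical), such a mapping looks roughly like the following Helm values fragment:

```sh
# Illustrative only: map the remote cluster's network to its ingress gateway, so
# endpoints discovered there are reached through that gateway (Split Horizon EDS).
cat > mesh-networks-values.yaml <<EOF
global:
  meshNetworks:
    network-cluster2:               # hypothetical network name
      endpoints:
      - fromRegistry: cluster2      # hypothetical cluster registry name
      gateways:
      - address: 192.0.2.10         # hypothetical public IP of cluster2's ingress gateway
        port: 443
EOF
```

With our Istio operator you don't have to maintain this mapping by hand; as described below, the operator keeps the gateway addresses in sync for you.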
How the Istio operator forms a single mesh
In a single mesh scenario, there is one Istio control plane on one of the clusters that receives information about service and pod states from remote clusters. To achieve this, the kubeconfig of each remote cluster must be added to the cluster where the control plane is running in the form of a k8s secret.
The Istio operator uses a CRD called RemoteIstio to store the desired state of a given remote Istio configuration. Adding a new remote cluster to the mesh is as simple as defining a RemoteIstio resource with the same name as the secret which contains the kubeconfig for the cluster.
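As a minimal sketch (the cluster name, kubeconfig path and the spec fields shown are assumptions for illustration; check the operator's documentation for the exact RemoteIstio schema), adding a remote cluster could look roughly like this:

```sh
# 1. Add the remote cluster's kubeconfig as a secret on the cluster running the
#    Istio control plane (names and paths below are hypothetical).
kubectl create secret generic remote-cluster-1 \
  --from-file=kubeconfig=./remote-cluster-1.kubeconfig \
  -n istio-system

# 2. Create a RemoteIstio resource with the SAME name as that secret, so the
#    operator knows which kubeconfig to use for the remote cluster.
kubectl apply -n istio-system -f - <<EOF
apiVersion: istio.banzaicloud.io/v1beta1
kind: RemoteIstio
metadata:
  name: remote-cluster-1
spec:
  autoInjectionNamespaces:
  - default
EOF
```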
The operator handles deploying Istio components to clusters and – because inter-cluster communication goes through a cluster’s Istio gateways in the Split Horizon EDS setup – implements a sync mechanism, which provides constant reachability between the clusters by syncing ingress gateway addresses.
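For reference, the address being synced is simply the externally reachable address of each cluster's Istio ingress gateway; with the default Istio namespace and service names, you can inspect it like this:

```sh
# The operator syncs this automatically; the command below only shows where the
# advertised address comes from (default Istio service/namespace names assumed).
kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```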

Logs (one of the three pillars of observability, besides metrics and traces) are an indispensable part of any distributed application. Whether we run these applications on Kubernetes or not, logs are one of the best ways to diagnose and verify an application’s state. One of the key features of our Kubernetes platform, Pipeline, is to provide out-of-the-box metrics, trace support and log collection. This post highlights some of the behind-the-scenes automation we’ve constructed in order to achieve this.
We’ve been blogging for quite some time about logging on Kubernetes - if you are interested in brushing up on this subject, check out our earlier posts.
The EFK stack is one of the best-known logging pipelines used on Kubernetes. Usually, such a pipeline consists of collecting the logs, moving them to a centralized location and analyzing them. Generally speaking, the most difficult part of this operation is to collect, sanitize and securely move the logs to a centralized location. That’s what we do with our open source logging-operator, while utilizing the Fluent (fluentd and fluentbit) ecosystem.
Now let’s take a look at the part of this equation Elasticsearch comprises.
There are several ways to install an application like Elasticsearch on Kubernetes. For simple deployments, you can write YAML files, or, if you’re interested in templating and other advanced options, you can use Helm charts. A third alternative is to supercharge deployments with human operational knowledge, using operators. These not only install applications but provide life-cycle management. Elasticsearch is no exception, and can be deployed using any of the methods highlighted above.
There is both a Helm chart in the upstream Helm repository and an official Helm chart from Elastic.
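For example, a plain Helm-based install might look roughly like the following (chart names and Helm 2 syntax as of the time of writing; treat them as illustrative):

```sh
# Upstream community chart (illustrative; chart locations change over time):
helm install --name elasticsearch stable/elasticsearch

# Official chart from Elastic:
helm repo add elastic https://helm.elastic.co
helm install --name elasticsearch elastic/elasticsearch
```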
But, if you want something more complex, operators are just as readily available. There are a great number of them.
Install the operators
Next, we install those operators that manage custom resources. Note that the first command installs the operator itself, which won’t start anything until it receives its custom resources. The second command configures and applies the Elasticsearch, Cerebro and Kibana custom resources, while the last instruction deploys the Banzai Cloud logging-operator.
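We don’t reproduce the exact commands here; as a rough, hypothetical sketch of that flow (the operator manifest URL and CR file names are placeholders, and the chart repository shown is the Banzai Cloud Helm repository):

```sh
# 1. Install the chosen Elasticsearch operator; it stays idle until its custom
#    resources show up (placeholder URL, pick the operator you prefer).
kubectl apply -f https://example.com/elasticsearch-operator/all-in-one.yaml

# 2. Configure and apply the Elasticsearch, Cerebro and Kibana custom resources
#    (placeholder file names).
kubectl apply -f elasticsearch-cr.yaml -f cerebro-cr.yaml -f kibana-cr.yaml

# 3. Deploy the Banzai Cloud logging-operator from its Helm chart.
helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
helm install --name logging-operator banzaicloud-stable/logging-operator
```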
We have a demo application which, once deployed, kickstarts an automated logging flow. Note that all the steps we’ve done so far, or will do in the future, are automated by Pipeline, so you will not have to repeat them. Also note that Pipeline supports multiple endpoints to store logs.
We open sourced the logging-operator last year, and have been using it internally since, as part of the Pipeline ecosystem and for customer engagements. The project has always been open source, but unlike our Istio or Vault operators it has not been promoted to a wider audience beyond our customers and Pipeline users. Recently, more and more developers from the Kubernetes ecosystem have discovered the project, noticed its potential, and started to use and contribute to it. They have, however, butted up against some of its limitations. We always listen and try to do our best to take care of our open source users, so we put together a workshop and decided to make a few changes.
Global vs Scoped definitions
As we still need some resources to be available cluster-wide, we’ll introduce new CRDs to that end.
Using outputs as sharable resources
In several companies, only a handful of restricted users have access to log destinations, so we moved these to independent definitions. This allows users to reference these resources while preventing them from changing or deleting them.
Moving from templating to a create model
Before, we used templates exclusively to render Fluentd configurations. From now on, we’ll build a DAG to represent all logging flows. This added step, between collecting the custom resources and rendering them into the configuration, will help to identify misconfigurations and provide visual representations for the user.
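The new CRDs were still being designed when this was written; purely to illustrate the direction (the apiVersion, kind and fields below are hypothetical, not a final API), a namespace-scoped flow referencing a shared, cluster-wide output could look something like this:

```sh
# Hypothetical sketch: a namespaced Flow selects an application's logs and sends
# them to an output that an administrator defined once, cluster-wide.
kubectl apply -n my-app -f - <<EOF
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: my-app-logs
spec:
  match:
  - select:
      labels:
        app: my-app
  outputRefs:
  - shared-s3-output    # a restricted, cluster-wide output definition
EOF
```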
