From e3375c8cb967b6a13a3b7efa94211b31dce9321d Mon Sep 17 00:00:00 2001 From: Jeff Lockhart Date: Thu, 1 Feb 2024 19:47:57 -0700 Subject: [PATCH] Deployed 60b89addd to 3.1 with MkDocs 1.5.3 and mike 2.0.0 --- 3.1/changelog/index.html | 2 +- 3.1/search/search_index.json | 2 +- 3.1/sitemap.xml | 70 +++++++++++++++++------------------ 3.1/sitemap.xml.gz | Bin 525 -> 525 bytes 4 files changed, 37 insertions(+), 37 deletions(-) diff --git a/3.1/changelog/index.html b/3.1/changelog/index.html index 0ea83b308..88a6be1b4 100644 --- a/3.1/changelog/index.html +++ b/3.1/changelog/index.html @@ -1620,7 +1620,7 @@

3.1.3-1.1.0C SDK v3.1.3 -
  • Update to Kotlin 1.9.22 (1b1ba2e)
  • Update to Kotlin 1.9.22 (8546e4b)
  • Handle empty log domain set (00db837)
  • Source-incompatible change: Convert @Throws getter functions to properties (#12)
    • Database.getIndexes() -> Database.indexes
    • diff --git a/3.1/search/search_index.json b/3.1/search/search_index.json index 48e3cd03a..9f2407338 100644 --- a/3.1/search/search_index.json +++ b/3.1/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Kotbase","text":"

      Kotlin Multiplatform library for Couchbase Lite

      "},{"location":"#introduction","title":"Introduction","text":"

      Kotbase pairs Kotlin Multiplatform with Couchbase Lite, an embedded NoSQL JSON document database. Couchbase Lite can be used as a standalone client database, or paired with Couchbase Server and Sync Gateway or Capella App Services for cloud to edge data synchronization. Features include:

      • SQL++, key/value, and full-text search queries
      • Observable queries, documents, databases, and replicators
      • Binary document attachments (blobs)
      • Peer-to-peer and cloud-to-edge data sync

      Kotbase provides full Enterprise and Community Edition API support for Android and JVM, native iOS and macOS, and experimental support for available APIs in native Linux and Windows.

      "},{"location":"active-peer/","title":"Active Peer","text":"

      How to set up a replicator to connect with a listener and replicate changes using peer-to-peer sync

      Android enablers

      Allow Unencrypted Network Traffic

      To use cleartext, unencrypted network traffic (http:// and/or ws://), include android:usesCleartextTraffic=\"true\" in the application element of the manifest as shown on developer.android.com. This is not recommended in production.

      Use Background Threads

      As with any network or file I/O activity, Couchbase Lite activities should not be performed on the UI thread. Always use a background thread.
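      For instance, database work can be handed to a dedicated background executor. This is a minimal sketch in plain Kotlin; saveDocument() is a hypothetical stand-in for any Couchbase Lite call:

```kotlin
import java.util.concurrent.Executors

// Hypothetical stand-in for any Couchbase Lite operation (not a real API)
fun saveDocument(id: String): String = "saved:$id"

fun main() {
    // A dedicated single-thread executor keeps database I/O off the UI thread
    val dbExecutor = Executors.newSingleThreadExecutor()
    val result = dbExecutor.submit<String> { saveDocument("doc-1") }.get()
    println(result)
    dbExecutor.shutdown()
}
```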

      Code Snippets

      All code examples are indicative only. They demonstrate the basic concepts and approaches to using a feature. Use them as inspiration and adapt these examples to best practice when developing applications for your platform.

      "},{"location":"active-peer/#introduction","title":"Introduction","text":"

      This is an Enterprise Edition feature.

      This content provides sample code and configuration examples covering the implementation of Peer-to-Peer Sync over WebSockets. Specifically, it covers the implementation of an Active Peer.

      This active peer (also referred to as a client or replicator) will initiate the connection with a Passive Peer (also referred to as a server or listener) and participate in the replication of database changes to bring both databases into sync.

      Subsequent sections provide additional details and examples for the main configuration options.

      Secure Storage

      The use of TLS and its associated keys and certificates requires secure storage to minimize the chances of a security breach. The implementation of this storage differs from platform to platform \u2014 see Using Secure Storage.

      "},{"location":"active-peer/#configuration-summary","title":"Configuration Summary","text":"

      You should configure and initialize a replicator for each Couchbase Lite database instance you want to sync. Example 1 shows the initialization and configuration process.

      Note

      As with any network or file I/O activity, Couchbase Lite activities should not be performed on the UI thread. Always use a background thread.

      Example 1. Replication configuration and initialization

      val repl = Replicator(\n    // initialize the replicator configuration\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"wss://listener.com:8954\"),\n\n        collections = mapOf(\n            collections to CollectionConfiguration(\n                conflictResolver = ReplicatorConfiguration.DEFAULT_CONFLICT_RESOLVER\n            )\n        ),\n\n        // Set replicator type\n        type = ReplicatorType.PUSH_AND_PULL,\n\n        // Configure Sync Mode\n        continuous = false, // default value\n\n        // Configure Server Authentication --\n        // only accept self-signed certs\n        acceptOnlySelfSignedServerCertificate = true,\n\n        // Configure the credentials the\n        // client will provide if prompted\n        authenticator = BasicAuthenticator(\"PRIVUSER\", \"let me in\".toCharArray())\n    )\n)\n\n// Optionally add a change listener\nval token = repl.addChangeListener { change ->\n    val err: CouchbaseLiteException? = change.status.error\n    if (err != null) {\n        println(\"Error code ::  ${err.code}\\n$err\")\n    }\n}\n\n// Start replicator\nrepl.start(false)\n\nthis.replicator = repl\nthis.token = token\n
      1. Get the listener\u2019s endpoint. Here we use a known URL, but it could be a URL established dynamically in a discovery phase.
      2. Identify the collections from the local database to be used.
      3. Configure how the replication should perform Conflict Resolution.
      4. Configure how the client will authenticate the server. Here we say connect only to servers presenting a self-signed certificate. By default, clients accept only servers presenting certificates that can be verified using the OS bundled Root CA Certificates \u2014 see Authenticating the Listener.
      5. Configure the credentials the client will present to the server. Here we say to provide Basic Authentication credentials. Other options are available \u2014 see Example 7.
      6. Initialize the replicator using your configuration object.
      7. Register an observer, which will notify you of changes to the replication status.
      8. Start the replicator.
      "},{"location":"active-peer/#device-discovery","title":"Device Discovery","text":"

      This phase is optional: if the listener is initialized on a well-known URL endpoint (for example, a static IP address or well-known DNS address), then you can simply configure Active Peers to connect to those endpoints.

      Prior to connecting with a listener you may execute a peer discovery phase to dynamically discover peers.

      For the Active Peer this involves browsing for and selecting the appropriate service using a zero-config protocol such as Network Service Discovery on Android or Bonjour on iOS.

      "},{"location":"active-peer/#configure-replicator","title":"Configure Replicator","text":"

      In this section Configure Target | Sync Mode | Retry Configuration | Authenticating the Listener | Client Authentication

      "},{"location":"active-peer/#configure-target","title":"Configure Target","text":"

      Initialize and define the replication configuration with local and remote database locations using the ReplicatorConfiguration object.

      The constructor provides the server\u2019s URL (including the port number and the name of the remote database to sync with).

      It is expected that the app will identify the IP address and URL and append the remote database name to the URL endpoint, producing for example: wss://10.0.2.2:4984/travel-sample.

      The URL scheme for WebSocket URLs uses ws: (non-TLS) or wss: (SSL/TLS) prefixes.
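      As a sketch of that assembly (the hostname, port, and database name below are placeholders, not real servers):

```kotlin
// Assemble a WebSocket replication endpoint URL from its parts
fun syncUrl(host: String, port: Int, database: String, useTls: Boolean = true): String {
    val scheme = if (useTls) "wss" else "ws"   // wss = TLS, ws = cleartext
    return "$scheme://$host:$port/$database"
}

fun main() {
    println(syncUrl("10.0.2.2", 4984, "travel-sample")) // wss://10.0.2.2:4984/travel-sample
}
```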

      Note

      On the Android platform, to use cleartext, unencrypted network traffic (http:// and/or ws://), include android:usesCleartextTraffic=\"true\" in the application element of the manifest as shown on developer.android.com. This is not recommended in production.

      Add the database collections to sync, along with the CollectionConfiguration for each, to the ReplicatorConfiguration. Multiple collections can share the same configuration, or each can have its own, as needed. A null configuration uses the default configuration values, found in Defaults.Replicator.

      Example 2. Add Target to Configuration

      // initialize the replicator configuration\nval config = ReplicatorConfigurationFactory.newConfig(\n    target = URLEndpoint(\"wss://10.0.2.2:8954/travel-sample\"),\n    collections = mapOf(collections to null)\n)\n

      Note the use of the scheme prefix: wss:// ensures TLS encryption (strongly recommended in production), while ws:// is unencrypted.

      "},{"location":"active-peer/#sync-mode","title":"Sync Mode","text":"

      Here we define the direction and type of replication we want to initiate.

      We use ReplicatorConfiguration class\u2019s type and isContinuous properties to tell the replicator:

      • The type (or direction) of the replication: PUSH_AND_PULL; PULL; PUSH
      • The replication mode, that is either of:
        • Continuous \u2014 remaining active indefinitely to replicate changed documents (isContinuous=true).
        • Ad-hoc \u2014 a one-shot replication of changed documents (isContinuous=false).

      Example 3. Configure replicator type and mode

      // Set replicator type\ntype = ReplicatorType.PUSH_AND_PULL,\n\n// Configure Sync Mode\ncontinuous = false, // default value\n

      Tip

      Unless there is a solid use case not to, always initiate a single PUSH_AND_PULL replication rather than identical separate PUSH and PULL replications.

      This prevents the replications from generating the same checkpoint docID, which would result in multiple conflicts.

      "},{"location":"active-peer/#retry-configuration","title":"Retry Configuration","text":"

      Couchbase Lite\u2019s replication retry logic ensures a resilient connection.

      The replicator minimizes the chance and impact of dropped connections by maintaining a heartbeat: essentially, pinging the listener at a configurable interval to confirm that the connection remains alive.

      In the event it detects a transient error, the replicator will attempt to reconnect, stopping only when the connection is re-established, or the number of retries exceeds the retry limit (9 times for a single-shot replication and unlimited for a continuous replication).

      On each retry the interval between attempts is increased exponentially (exponential backoff) up to the maximum wait time limit (5 minutes).
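      The schedule can be modeled in plain Kotlin. Only the 300-second cap comes from the text above; the 2-second base and doubling factor are illustrative assumptions:

```kotlin
import kotlin.math.min

// Illustrative exponential backoff: the 300 s cap is documented,
// the base interval and doubling factor are assumptions for this sketch
fun retryDelaySeconds(attempt: Int, baseSeconds: Long = 2, maxWaitSeconds: Long = 300): Long =
    min(baseSeconds shl attempt, maxWaitSeconds)

fun main() {
    // Delays double on each retry, then plateau at the cap:
    // [2, 4, 8, 16, 32, 64, 128, 256, 300, 300]
    println((0..9).map { retryDelaySeconds(it) })
}
```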

      The replicator API provides control over this retry logic through a set of configurable properties \u2014 see Table 1.

      Table 1. Replication Retry Configuration Properties

      setHeartbeat()
      Use cases:
      • Reduce to detect connection errors sooner
      • Align to load-balancer or proxy keep-alive interval \u2014 see Sync Gateway\u2019s topic Load Balancer - Keep Alive
      Description: The interval (in seconds) between the heartbeat pulses. Default: The replicator pings the listener every 300 seconds.

      setMaxAttempts()
      Use case: Change this to limit or extend the number of retry attempts.
      Description: The maximum number of retry attempts.
      • Set to zero (0) to use default values
      • Set to one (1) to prevent any retry attempt
      • The retry attempt count is reset when the replicator is able to connect and replicate
      • Default values are:
        • Single-shot replication = 9
        • Continuous replication = maximum integer value
      • Negative values generate a Couchbase exception, InvalidArgumentException

      setMaxAttemptWaitTime()
      Use case: Change this to adjust the interval between retries.
      Description: The maximum interval between retry attempts. While you can configure the maximum permitted wait time, the replicator\u2019s exponential backoff algorithm calculates each individual interval, which is not configurable.
      • Default value: 300 seconds (5 minutes)
      • Zero sets the maximum interval between retries to the default of 300 seconds
      • 300 sets the maximum interval between retries to the default of 300 seconds
      • A negative value generates a Couchbase exception, InvalidArgumentException

      When necessary you can adjust any or all of those configurable values \u2014 see Example 4 for how to do this.

      Example 4. Configuring Replication Retries

      val repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to null),\n        //  other config params as required . .\n        heartbeat = 150, \n        maxAttempts = 20,\n        maxAttemptWaitTime = 600\n    )\n)\nrepl.start()\nthis.replicator = repl\n
      "},{"location":"active-peer/#authenticating-the-listener","title":"Authenticating the Listener","text":"

      Define the credentials your app (the client) is expecting to receive from the server (listener) in order to ensure that the server is one it is prepared to interact with.

      Note that the client cannot authenticate the server if TLS is turned off. When TLS is enabled (the listener\u2019s default), the client must authenticate the server. If the server cannot provide acceptable credentials, the connection will fail.

      Use ReplicatorConfiguration properties setAcceptOnlySelfSignedServerCertificate and setPinnedServerCertificate, to tell the replicator how to verify server-supplied TLS server certificates.

      • If there is a pinned certificate, nothing else matters: the server cert must exactly match the pinned certificate.
      • If there are no pinned certs and setAcceptOnlySelfSignedServerCertificate is true then any self-signed certificate is accepted. Certificates that are not self-signed are rejected, no matter who signed them.
      • If there are no pinned certificates and setAcceptOnlySelfSignedServerCertificate is false (default), the client validates the server\u2019s certificates against the system CA certificates. The server must supply a chain of certificates whose root is signed by one of the certificates in the system CA bundle.
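      Those three rules amount to a small decision function. In this sketch the certificate checks are modeled as plain booleans; the real verification is performed by the replicator itself:

```kotlin
// Simplified model of the server-certificate rules above;
// booleans stand in for real TLS verification results
fun serverCertAccepted(
    matchesPinnedCert: Boolean?,   // null when no certificate is pinned
    acceptOnlySelfSigned: Boolean,
    isSelfSigned: Boolean,
    chainsToSystemCa: Boolean
): Boolean = when {
    matchesPinnedCert != null -> matchesPinnedCert   // pinned cert: nothing else matters
    acceptOnlySelfSigned -> isSelfSigned             // only self-signed certs accepted
    else -> chainsToSystemCa                         // default: validate against system CAs
}
```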

      Example 5. Set Server TLS security

      CA Cert | Self-Signed Cert | Pinned Certificate

      Set the client to expect and accept only CA attested certificates.

      // Configure Server Security\n// -- only accept CA attested certs\nacceptOnlySelfSignedServerCertificate = false,\n

      This is the default. Only certificate chains with roots signed by a trusted CA are allowed. Self-signed certificates are not allowed.

      Set the client to expect and accept only self-signed certificates.

      // Configure Server Authentication --\n// only accept self-signed certs\nacceptOnlySelfSignedServerCertificate = true,\n

      Set this to true to accept any self-signed cert. Any certificates that are not self-signed are rejected.

      Set the client to expect and accept only a pinned certificate.

      // Use the pinned certificate from the byte array (cert)\npinnedServerCertificate = TLSIdentity.getIdentity(\"Our Corporate Id\")\n    ?.certs?.firstOrNull()\n    ?: throw IllegalStateException(\"Cannot find corporate id\"),\n

      Configure the pinned certificate; here the certificate is obtained from a stored TLSIdentity rather than read from a raw byte array.

      "},{"location":"active-peer/#client-authentication","title":"Client Authentication","text":"

      Here we define the credentials that the client can present to the server if prompted to do so in order that the server can authenticate it.

      We use ReplicatorConfiguration's authenticator property to define the authentication method to the replicator.

      "},{"location":"active-peer/#basic-authentication","title":"Basic Authentication","text":"

      Use the BasicAuthenticator to supply basic authentication credentials (username and password).

      Example 6. Basic Authentication

      This example shows basic authentication using username and password:

      // Configure the credentials the\n// client will provide if prompted\nauthenticator = BasicAuthenticator(\"PRIVUSER\", \"let me in\".toCharArray())\n
      "},{"location":"active-peer/#certificate-authentication","title":"Certificate Authentication","text":"

      Use the ClientCertificateAuthenticator to configure the client TLS certificates to be presented to the server, on connection. This applies only to the URLEndpointListener.

      Note

      The server (listener) must have isTlsDisabled set to false and have a ListenerCertificateAuthenticator configured, or it will never ask for this client\u2019s certificate.

      The certificate presented to the server must either be signed by a root certificate trusted by the listener, or be validated by the authentication callback set on the listener via ListenerCertificateAuthenticator.

      Example 7. Client Cert Authentication

      This example shows client certificate authentication using an identity from secure storage.

      // Provide a client certificate to the server for authentication\nauthenticator = ClientCertificateAuthenticator(\n    TLSIdentity.getIdentity(\"clientId\")\n        ?: throw IllegalStateException(\"Cannot find client id\")\n)\n
      1. Get an identity from secure storage and create a TLSIdentity object
      2. Set the authenticator to ClientCertificateAuthenticator and configure it to use the retrieved identity
      "},{"location":"active-peer/#initialize-replicator","title":"Initialize Replicator","text":"

      Use the Replicator class\u2019s Replicator(ReplicatorConfiguration) constructor, to initialize the replicator with the configuration you have defined. You can, optionally, add a change listener (see Monitor Sync) before starting the replicator running using start().

      Example 8. Initialize and run replicator

      // Create replicator\n// Consider holding a reference somewhere\n// to prevent the Replicator from being GCed\nval repl = Replicator(\n    // initialize the replicator configuration\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"wss://listener.com:8954\"),\n\n        collections = mapOf(collections to null),\n\n        // Set replicator type\n        type = ReplicatorType.PUSH_AND_PULL,\n\n        // Configure Sync Mode\n        continuous = false, // default value\n\n        // set auto-purge behavior\n        // (here we override default)\n        enableAutoPurge = false,\n\n        // Configure Server Authentication --\n        // only accept self-signed certs\n        acceptOnlySelfSignedServerCertificate = true,\n\n        // Configure the credentials the\n        // client will provide if prompted\n        authenticator = BasicAuthenticator(\"PRIVUSER\", \"let me in\".toCharArray())\n    )\n)\n\n// Start replicator\nrepl.start(false)\n\nthis.replicator = repl\n
      1. Initialize the replicator with the configuration
      2. Start the replicator
      "},{"location":"active-peer/#monitor-sync","title":"Monitor Sync","text":"

      In this section Change Listeners | Replicator Status | Documents Pending Push

      You can monitor a replication\u2019s status by using a combination of Change Listeners and the replicator.status.activityLevel property \u2014 see activityLevel. This enables you to know, for example, when the replication is actively transferring data and when it has stopped.

      "},{"location":"active-peer/#change-listeners","title":"Change Listeners","text":"

      Use this to monitor changes and to inform on sync progress; this is an optional step. You can add a replicator change listener at any point; it will report changes from the point it is registered.

      Tip

      Don\u2019t forget to save the token so you can remove the listener later.

      Use the Replicator class to add a change listener as a callback with Replicator.addChangeListener() \u2014 see Example 9. You will then be asynchronously notified of state changes.

      You can remove a change listener with removeChangeListener(ListenerToken).
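      The underlying pattern is simply that registration returns a handle whose only job is to undo the registration. A minimal model in plain Kotlin (not the actual Kotbase types):

```kotlin
// Minimal model of the listener/token pattern (not the real Kotbase classes)
class ListenerRegistry<T> {
    private val listeners = mutableListOf<(T) -> Unit>()

    // Returns a "token": invoking it removes exactly this listener
    fun addListener(listener: (T) -> Unit): () -> Unit {
        listeners.add(listener)
        return { listeners.remove(listener) }
    }

    fun notifyChange(change: T) = listeners.toList().forEach { it(change) }
    val size: Int get() = listeners.size
}
```

      This is why the token must be kept: without it there is no way to identify which registration to remove.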

      "},{"location":"active-peer/#using-kotlin-flows","title":"Using Kotlin Flows","text":"

      Kotlin developers can take advantage of Flows to monitor replicators.

      fun replChangeFlowExample(repl: Replicator): Flow<ReplicatorActivityLevel> {\n    return repl.replicatorChangesFlow()\n        .map { it.status.activityLevel }\n}\n
      "},{"location":"active-peer/#replicator-status","title":"Replicator Status","text":"

      You can use the ReplicatorStatus class to check the replicator status. That is, whether it is actively transferring data or if it has stopped \u2014 see Example 9.

      The returned ReplicatorStatus structure comprises:

      • activityLevel \u2014 STOPPED, OFFLINE, CONNECTING, IDLE, or BUSY \u2014 see states described in Table 2
      • progress
        • completed \u2014 the total number of changes completed
        • total \u2014 the total number of changes to be processed
      • error \u2014 the current error, if any

      Example 9. Monitor replication

      Adding a Change Listener | Using replicator.status
      val token = repl.addChangeListener { change ->\n    val err: CouchbaseLiteException? = change.status.error\n    if (err != null) {\n        println(\"Error code :: ${err.code}\\n$err\")\n    }\n}\n
      repl.status.let {\n    val progress = it.progress\n    println(\n        \"The Replicator is ${\n            it.activityLevel\n        } and has processed ${\n            progress.completed\n        } of ${progress.total} changes\"\n    )\n}\n
      "},{"location":"active-peer/#replication-states","title":"Replication States","text":"

      Table 2 shows the different states, or activity levels, reported by the API and the meaning of each.

      Table 2. Replicator activity levels

      STOPPED: The replication is finished or hit a fatal error.
      OFFLINE: The replicator is offline because the remote host is unreachable.
      CONNECTING: The replicator is connecting to the remote host.
      IDLE: The replication has caught up with all the changes available from the server. The IDLE state is only used in continuous replications.
      BUSY: The replication is actively transferring data.

      Note

      The replication change object also has properties to track progress (change.status.completed and change.status.total). Since replication occurs in batches, the total count can vary through the course of a replication.
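      Because total can change while batches are processed, guard any progress calculation; a small sketch:

```kotlin
// Progress as a fraction, guarding against total == 0 early in replication
// and clamping in case completed briefly exceeds a stale total
fun progressFraction(completed: Long, total: Long): Double =
    if (total <= 0) 0.0 else (completed.toDouble() / total).coerceIn(0.0, 1.0)
```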

      "},{"location":"active-peer/#replication-status-and-app-life-cycle","title":"Replication Status and App Life Cycle","text":""},{"location":"active-peer/#ios","title":"iOS","text":"

      The following diagram describes the status changes when the application starts a replication, and when the application is being backgrounded or foregrounded by the OS. It applies to iOS only.

      Additionally, on iOS, an app already in the background may be terminated. In this case, the Database and Replicator instances will be null when the app returns to the foreground. Therefore, as a preventive measure, it is recommended to do a null check when the app enters the foreground, and to re-initialize the database and replicator if either is null.

      On other platforms, Couchbase Lite doesn\u2019t react to OS backgrounding or foregrounding events, and replication(s) will continue running as long as the remote system does not terminate the connection and the app does not terminate. It is generally recommended to stop replications before going into the background; otherwise, socket connections may be closed by the OS, which may interfere with the replication process.

      "},{"location":"active-peer/#other-platforms","title":"Other Platforms","text":"

      Couchbase Lite replications will continue running until the app terminates, unless the remote system, or the application, terminates the connection.

      Note

      Recall that the Android OS may kill an application without warning. You should explicitly stop replication processes when they are no longer useful (for example, when the app is in the background and the replication is IDLE) to avoid socket connections being closed by the OS, which may interfere with the replication process.

      "},{"location":"active-peer/#documents-pending-push","title":"Documents Pending Push","text":"

      Tip

      Replicator.isDocumentPending() is quicker and more efficient. Use it in preference to returning a list of pending document IDs, where possible.

      You can check whether documents are waiting to be pushed in any forthcoming sync by using either of the following API methods:

      • Use the Replicator.getPendingDocumentIds() method, which returns a list of document IDs that have local changes, but which have not yet been pushed to the server. This can be very useful in tracking the progress of a push sync, enabling the app to provide a visual indicator to the end user on its status, or decide when it is safe to exit.
      • Use the Replicator.isDocumentPending() method to quickly check whether an individual document is pending a push.

      Example 10. Use Pending Document ID API

      val repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(setOf(collection) to null),\n        type = ReplicatorType.PUSH\n    )\n)\n\nval pendingDocs = repl.getPendingDocumentIds(collection)\n\n// iterate and report on previously\n// retrieved pending docIds 'list'\nif (pendingDocs.isNotEmpty()) {\n    println(\"There are ${pendingDocs.size} documents pending\")\n\n    val firstDoc = pendingDocs.first()\n    repl.addChangeListener { change ->\n        println(\"Replicator activity level is ${change.status.activityLevel}\")\n        try {\n            if (!repl.isDocumentPending(firstDoc)) {\n                println(\"Doc ID $firstDoc has been pushed\")\n            }\n        } catch (err: CouchbaseLiteException) {\n            println(\"Failed getting pending docs\\n$err\")\n        }\n    }\n\n    repl.start()\n    this.replicator = repl\n}\n
      1. Replicator.getPendingDocumentIds() returns a list of the document IDs for all documents waiting to be pushed. This is a snapshot and may have changed by the time the response is received and processed.
      2. Replicator.isDocumentPending() returns true if the document is waiting to be pushed, and false otherwise.
      "},{"location":"active-peer/#stop-sync","title":"Stop Sync","text":"

      Stopping a replication is straightforward. It is done using stop(). This initiates an asynchronous operation and so is not necessarily immediate. Your app should account for this potential delay before attempting any subsequent operations.

      You can find further information on database operations in Databases.

      Example 11. Stop replicator

      // Stop replication.\nrepl.stop()\n

      Here we initiate the stopping of the replication using the stop() method. It will stop any active change listener once the replication is stopped.

      "},{"location":"active-peer/#conflict-resolution","title":"Conflict Resolution","text":"

      Unless you specify otherwise, Couchbase Lite\u2019s default conflict resolution policy is applied \u2014 see Handling Data Conflicts.

      To use a different policy, specify a conflict resolver using conflictResolver as shown in Example 12.

      For more complex solutions you can provide a custom conflict resolver - see Handling Data Conflicts.

      Example 12. Using conflict resolvers

      Local Wins | Remote Wins | Merge
      val localWinsResolver: ConflictResolver = { conflict ->\n    conflict.localDocument\n}\nconfig.conflictResolver = localWinsResolver\n
      val remoteWinsResolver: ConflictResolver = { conflict ->\n    conflict.remoteDocument\n}\nconfig.conflictResolver = remoteWinsResolver\n
      val mergeConflictResolver: ConflictResolver = { conflict ->\n    val localDoc = conflict.localDocument?.toMap()?.toMutableMap()\n    val remoteDoc = conflict.remoteDocument?.toMap()?.toMutableMap()\n\n    val merge: MutableMap<String, Any?>?\n    if (localDoc == null) {\n        merge = remoteDoc\n    } else {\n        merge = localDoc\n        if (remoteDoc != null) {\n            merge.putAll(remoteDoc)\n        }\n    }\n\n    if (merge == null) {\n        MutableDocument(conflict.documentId)\n    } else {\n        MutableDocument(conflict.documentId, merge)\n    }\n}\nconfig.conflictResolver = mergeConflictResolver\n

      Just as a replicator may observe a conflict \u2014 when updating a document that has changed both in the local database and in a remote database \u2014 any attempt to save a document may also observe a conflict, if a replication has taken place since the local app retrieved the document from the database. To address that possibility, a version of the Database.save() method also takes a conflict resolver as shown in Example 13.

      The following code snippet shows an example of merging properties from the existing document (curDoc) into the one being saved (newDoc). In the event of conflicting keys, it will pick the key value from newDoc.

      Example 13. Merging document properties

      val mutableDocument = database.getDocument(\"xyz\")?.toMutable() ?: return\nmutableDocument.setString(\"name\", \"apples\")\ndatabase.save(mutableDocument) { newDoc, curDoc ->\n    if (curDoc == null) {\n        return@save false\n    }\n    val dataMap: MutableMap<String, Any?> = curDoc.toMap().toMutableMap()\n    dataMap.putAll(newDoc.toMap())\n    newDoc.setData(dataMap)\n    true\n}\n

      For more on replicator conflict resolution see Handling Data Conflicts.

      "},{"location":"active-peer/#delta-sync","title":"Delta Sync","text":"

      If delta sync is enabled on the listener, then replication will use delta sync.

      "},{"location":"blobs/","title":"Blobs","text":"

      Couchbase Lite database data model concepts \u2014 blobs

      "},{"location":"blobs/#introduction","title":"Introduction","text":"

      Couchbase Lite uses blobs to store the contents of images, other media files and similar format files as binary objects.

      The blob itself is not stored in the document. It is held in a separate content-addressable store indexed from the document and retrieved only on-demand.

      When a document is synchronized, the Couchbase Lite replicator adds an _attachments dictionary to the document\u2019s properties if it contains a blob \u2014 see Figure 1.

      "},{"location":"blobs/#blob-objects","title":"Blob Objects","text":"

      The blob as an object appears in a document as a dictionary property \u2014 see, for example, avatar in Figure 1.

      Other properties include length (the length in bytes), and optionally content_type (typically, its MIME type).

      The blob\u2019s data (an image, audio or video content) is not stored in the document, but in a separate content-addressable store, indexed by the digest property \u2014 see Using Blobs.

      "},{"location":"blobs/#constraints","title":"Constraints","text":"
      • Couchbase Lite Blobs can be arbitrarily large. They are only read on demand, not when you load a document.
      • Sync Gateway The maximum content size is 20 MB per blob. If a document\u2019s blob is over 20 MB, the document will be replicated but not the blob.
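      That limit is easy to check before saving a blob you intend to sync. A sketch; interpreting 20 MB as 20 * 1024 * 1024 bytes is an assumption:

```kotlin
// Sync Gateway blob limit; 20 MB read as 20 * 1024 * 1024 bytes (assumption)
const val SYNC_GATEWAY_BLOB_LIMIT_BYTES: Long = 20L * 1024 * 1024

// A blob over the limit is skipped during replication,
// though its document still syncs
fun blobWillReplicate(sizeBytes: Long): Boolean = sizeBytes <= SYNC_GATEWAY_BLOB_LIMIT_BYTES
```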
      "},{"location":"blobs/#using-blobs","title":"Using Blobs","text":"

      The Blob API lets you access the blob\u2019s data content as in-memory data (a ByteArray) or as a Source input stream.

      The code in Example 1 shows how you might add a blob to a document and save it to the database. Here we use avatar as the property key and a jpeg file as the blob data.

      Example 1. Working with blobs

      // kotlinx-io multiplatform file system APIs are still in development\n// However, platform-specific implementations can be created in the meantime\nexpect fun getAsset(file: String): Source?\n\nval mDoc = MutableDocument()\n\ngetAsset(\"avatar.jpg\")?.use { source ->\n  mDoc.setBlob(\"avatar\", Blob(\"image/jpeg\", source))\n  collection.save(mDoc)\n}\n\nval doc = collection.getDocument(mDoc.id)\nval bytes = doc?.getBlob(\"avatar\")?.content\n
      1. Prepare a document to use for the example.
      2. Create the blob using the retrieved image and set image/jpeg as the blob MIME type.
      3. Add the blob to a document, using avatar as the property key.
4. Saving the document generates a random access key for each blob, stored in the digest property as a SHA-1 digest \u2014 see Figure 1.
5. Use the avatar key to retrieve the blob object later. Note, this is the name we assigned to the blob; the replicator auto-generates its own name for each attachment (for example, blob_1) \u2014 see Figure 1. The digest key will be the same as the one generated when we saved the blob document.
      "},{"location":"blobs/#syncing","title":"Syncing","text":"

      When a document containing a blob object is synchronized, the Couchbase Lite replicator generates an _attachments dictionary with an auto-generated name for each blob attachment. This is different to the avatar key and is used internally to access the blob content.

      If you view a sync\u2019ed blob document in Couchbase Server Admin Console, you will see something similar to Figure 1, which shows the document with its generated _attachments dictionary, including the digest.

      Figure 1. Sample Blob Document"},{"location":"changelog/","title":"Change Log","text":""},{"location":"changelog/#313-110","title":"3.1.3-1.1.0","text":"

1 Feb 2024

      • Scopes and Collections \u2014 Couchbase Lite 3.1 API (#11)
        • Android SDK v3.1.3
        • Java SDK v3.1.3
        • Objective-C SDK v3.1.4
        • C SDK v3.1.3
• Update to Kotlin 1.9.22 (8546e4b)
      • Handle empty log domain set (00db837)
      • Source-incompatible change: Convert @Throws getter functions to properties (#12)
        • Database.getIndexes() -> Database.indexes
        • Replicator.getPendingDocumentIds() -> Replicator.pendingDocumentIds
      • Make Expression, as, and from query builder functions infix (#14)
      "},{"location":"changelog/#ktx-extensions","title":"KTX extensions:","text":"
      • Add Expression math operator functions (148399d)
      • Add fetchContext to documentFlow, default to Dispatchers.IO (2abe61a)
      • Add mutableArrayOf, mutableDictOf, and mutableDocOf, collection and doc creation functions (#13)
      • selectDistinct, from, as, and groupBy convenience query builder functions (#14)
      "},{"location":"changelog/#3015-101","title":"3.0.15-1.0.1","text":"

      15 Dec 2023

      • Make Replicator AutoCloseable (#2)
      • Avoid memory leaks with memScoped toFLString() (#3)
      • Update Couchbase Lite to 3.0.15 (#4):
        • Android SDK v3.0.15
        • Java SDK v3.0.15
        • Objective-C SDK v3.0.15
        • C SDK v3.0.15
      • Update to Kotlin 1.9.21 (#5)
      • K2 compiler compatibility (#7)
      • Update kotlinx-serialization, kotlinx-datetime, and kotlinx-atomicfu (#8)
      • Use default hierarchy template source set names (#9)
      "},{"location":"changelog/#3012-100","title":"3.0.12-1.0.0","text":"

      1 Nov 2023

      Initial public release

      Using Couchbase Lite:

      • Android SDK v3.0.12
      • Java SDK v3.0.12
      • Objective-C SDK v3.0.12
      • C SDK v3.0.12
      "},{"location":"community/","title":"Community","text":"

      Join the #couchbase channel of the Kotlin Slack.

      Browse the Couchbase Community Hub.

      Chat in the Couchbase Discord.

      Post in the Couchbase Forums.

      "},{"location":"databases/","title":"Databases","text":"

      Working with Couchbase Lite databases

      "},{"location":"databases/#database-concepts","title":"Database Concepts","text":"

      Databases created on Couchbase Lite can share the same hierarchical structure as Couchbase Server or Capella databases. This makes it easier to sync data between mobile applications and applications built using Couchbase Server or Capella.

      Figure 1. Couchbase Lite Database Hierarchy

      Although the terminology is different, the structure can be mapped to relational database terms:

      Table 1. Relational Database \u2192 Couchbase

Relational database \u2192 Couchbase: Database \u2192 Database; Schema \u2192 Scope; Table \u2192 Collection

      This structure gives you plenty of choices when it comes to partitioning your data. The most basic structure is to use the single default scope with a single default collection; or you could opt for a structure that allows you to split your collections into logical scopes.

      Figure 2. Couchbase Lite Examples

      Storing local configuration

You may not need to sync all the data related to a particular application. You can set up a scope that syncs data, and a second scope that doesn\u2019t.

      One reason for doing this is to store local configuration data (such as the preferred screen orientation or keyboard layout). Since this information only relates to a particular device, there is no need to sync it:

Local data scope: Contains information pertaining to the device. Syncing data scope: Contains information pertaining to the user, which can be synced back to the cloud for use on the web or another device.

      "},{"location":"databases/#create-or-open-database","title":"Create or Open Database","text":"

      You can create a new database and-or open an existing database, using the Database class. Just pass in a database name and optionally a DatabaseConfiguration \u2014 see Example 1.

      Things to watch for include:

      • If the named database does not exist in the specified, or default, location then a new one is created
      • The database is created in a default location unless you specify a directory for it \u2014 see DatabaseConfiguration and DatabaseConfiguration.setDirectory()

      Tip

      Best Practice is to always specify the path to the database explicitly.

      Typically, the default location is the application sandbox or current working directory.

      See also Finding a Database File.

      Example 1. Open or create a database

      val database = Database(\n    \"my-db\",\n    DatabaseConfigurationFactory.newConfig(\n        \"path/to/database\"\n    )\n)\n

      Tip

      \"path/to/database\" might be a platform-specific location. Use expect/actual or dependency injection to provide a platform-specific database path.

      "},{"location":"databases/#close-database","title":"Close Database","text":"

      You are advised to incorporate the closing of all open databases into your application workflow.

      Closing a database is simple, just use Database.close() \u2014 see Example 2. This also closes active replications, listeners and-or live queries connected to the database.

      Note

      Closing a database soon after starting a replication involving it can cause an exception as the asynchronous replicator (start) may not yet be connected.

      Example 2. Close a Database

      database.close()\n
      "},{"location":"databases/#database-encryption","title":"Database Encryption","text":"

      This is an Enterprise Edition feature.

      Kotbase includes the ability to encrypt Couchbase Lite databases. This allows mobile applications to secure the data at rest, when it is being stored on the device. The algorithm used to encrypt the database is 256-bit AES.

      "},{"location":"databases/#enabling","title":"Enabling","text":"

      To enable encryption, use DatabaseConfiguration.setEncryptionKey() to set the encryption key of your choice. Provide this encryption key every time the database is opened \u2014 see Example 3.

      Example 3. Configure Database Encryption

      val db = Database(\n    \"my-db\",\n    DatabaseConfigurationFactory.newConfig(\n        encryptionKey = EncryptionKey(\"PASSWORD\")\n    )\n)\n
      "},{"location":"databases/#persisting","title":"Persisting","text":"

      Couchbase Lite does not persist the key. It is the application\u2019s responsibility to manage the key and store it in a platform specific secure store such as Apple\u2019s Keychain or Android\u2019s Keystore.

      "},{"location":"databases/#opening","title":"Opening","text":"

      An encrypted database can only be opened with the same language SDK that was used to encrypt it in the first place. So a database encrypted with Kotbase on Android (which uses the Couchbase Lite Android SDK) and then exported, is readable only by Kotbase on Android or the Couchbase Lite Android SDK.

      "},{"location":"databases/#changing","title":"Changing","text":"

      To change an existing encryption key, open the database using its existing encryption-key and use Database.changeEncryptionKey() to set the required new encryption-key value.

      "},{"location":"databases/#removing","title":"Removing","text":"

      To remove encryption, open the database using its existing encryption-key and use Database.changeEncryptionKey() with a null value as the encryption key.

      "},{"location":"databases/#finding-a-database-file","title":"Finding a Database File","text":""},{"location":"databases/#android","title":"Android","text":"

      When the application is running on the Android emulator, you can locate the application\u2019s data folder and access the database file by using the adb CLI tools. For example, to list the different databases on the emulator, you can run the following commands.

      Example 4. List files

      $ adb shell\n$ su\n$ cd /data/data/{APPLICATION_ID}/files\n$ ls\n

      The adb pull command can be used to pull a specific database to your host machine.

      Example 5. Pull using adb command

      $ adb root\n$ adb pull /data/data/{APPLICATION_ID}/files/{DATABASE_NAME}.cblite2 .\n
      "},{"location":"databases/#ios","title":"iOS","text":"

      When the application is running on the iOS simulator, you can locate the application\u2019s sandbox directory using the OpenSim utility.

      "},{"location":"databases/#database-maintenance","title":"Database Maintenance","text":"

      From time to time it may be necessary to perform certain maintenance activities on your database, for example to compact the database file, removing unused documents and blobs no longer referenced by any documents.

Couchbase Lite\u2019s API provides the Database.performMaintenance() method to accomplish this. The available maintenance operations, including compact, are shown in the enum MaintenanceType.

This is a resource-intensive operation and is not performed automatically. It should be run on-demand using the API. If in doubt, consult Couchbase support.

      "},{"location":"databases/#command-line-tool","title":"Command Line Tool","text":"

      cblite is a command-line tool for inspecting and querying Couchbase Lite databases.

      You can download and build it from the couchbaselabs GitHub repository.

      "},{"location":"databases/#troubleshooting","title":"Troubleshooting","text":"

      You should use console logs as your first source of diagnostic information. If the information in the default logging level is insufficient you can focus it on database errors and generate more verbose messages \u2014 see Example 6.

      For more on using Couchbase logs \u2014 see Using Logs.

      Example 6. Increase Level of Database Log Messages

      Database.log.console.domains = setOf(LogDomain.DATABASE) \n
      "},{"location":"differences/","title":"Differences from Java SDK","text":"

      Kotbase's API aligns with the Couchbase Lite Java and Android KTX SDKs. Migrating existing Kotlin code can be as straightforward as changing the import package from com.couchbase.lite to kotbase, with some exceptions:

      • Java callback functional interfaces are implemented as Kotlin function types.
      • File, URL, and URI APIs are represented as strings.
      • Date APIs use kotlinx-datetime's Instant.
      • InputStream APIs use kotlinx-io's Source.
      • Executor APIs use Kotlin's CoroutineContext.
      • Certificate APIs are available as raw ByteArrays or in platform-specific code.
      • There's no need to explicitly call CouchbaseLite.init(). Initialization functions can still be called with custom parameters in JVM and Android platform code.
      • Efforts have been made to detect and throw Kotlin exceptions for common error conditions, but NSError may still leak through on Apple platforms. Please report any occurrences that may deserve addressing.
      • Some deprecated APIs are omitted.
      • While not available in the Java SDK, as Java doesn't support operator overloading, Fragment subscript APIs are available in Kotbase, similar to Swift, Objective-C, and .NET.
      "},{"location":"documents/","title":"Documents","text":"

      Couchbase Lite concepts \u2014 Data model \u2014 Documents

      "},{"location":"documents/#overview","title":"Overview","text":""},{"location":"documents/#document-structure","title":"Document Structure","text":"

      In Couchbase Lite the term 'document' refers to an entry in the database. You can compare it to a record, or a row in a table.

      Each document has an ID or unique identifier. This ID is similar to a primary key in other databases.

      You can specify the ID programmatically. If you omit it, it will be automatically generated as a UUID.
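For illustration, the convention can be mimicked in plain Kotlin on the JVM. This is a sketch only; the exact format of SDK-generated IDs is not specified here.

```kotlin
import java.util.UUID

// A document ID you assign yourself can be any unique string...
val explicitId = "hotel::1002"

// ...while an omitted ID behaves as if a random UUID string were assigned.
val generatedId = UUID.randomUUID().toString()
```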

      Note

      Couchbase documents are assigned to a Collection. The ID of a document must be unique within the Collection it is written to. You cannot change it after you have written the document.

The document also has a value which contains the actual application data. This value is stored as a dictionary of key-value (k-v) pairs. The values can be made up of several different Data Types, such as numbers, strings, arrays, and nested objects.

      "},{"location":"documents/#data-encoding","title":"Data Encoding","text":"

      The document body is stored in an internal, efficient, binary form called Fleece. This internal form can be easily converted into a manageable native dictionary format for manipulation in applications.

      Fleece data is stored in the smallest format that will hold the value whilst maintaining the integrity of the value.

      "},{"location":"documents/#data-types","title":"Data Types","text":"

      The Document class offers a set of property accessors for various scalar types, such as:

      • Boolean
      • Date
      • Double
      • Float
      • Int
      • Long
      • String

      These accessors take care of converting to/from JSON encoding, and make sure you get the type you expect.

      In addition to these basic data types Couchbase Lite provides for the following:

      • Dictionary represents a read-only key-value pair collection
      • MutableDictionary represents a writeable key-value pair collection
• Array represents a read-only ordered collection of objects
      • MutableArray represents a writeable collection of objects
      • Blob represents an arbitrary piece of binary data
      "},{"location":"documents/#json","title":"JSON","text":"

      Couchbase Lite also provides for the direct handling of JSON data implemented in most cases by the provision of a toJSON() method on appropriate API classes (for example, on MutableDocument, Dictionary, Blob, and Array) \u2014 see Working with JSON Data.

      "},{"location":"documents/#constructing-a-document","title":"Constructing a Document","text":"

      An individual document often represents a single instance of an object in application code.

      You can consider a document as the equivalent of a 'row' in a relational table, with each of the document\u2019s attributes being equivalent to a 'column'.

      Documents can contain nested structures. This allows developers to express many-to-many relationships without requiring a reference or join table, and is naturally expressive of hierarchical data.

      Most apps will work with one or more documents, persisting them to a local database and optionally syncing them, either centrally or to the cloud.

      In this section we provide an example of how you might create a hotel document, which provides basic contact details and price data.

      Data Model
      hotel: {\n  type: string (value = `hotel`)\n  name: string\n  address: dictionary {\n    street: string\n    city: string\n    state: string\n    country: string\n    code: string\n  }\n  phones: array\n  rate: float\n}\n
      "},{"location":"documents/#open-a-database","title":"Open a Database","text":"

      First open your database. If the database does not already exist, Couchbase Lite will create it for you.

      Couchbase documents are assigned to a Collection. All the CRUD examples in this document operate on a collection object.

      // Get the database (and create it if it doesn\u2019t exist).\nval config = DatabaseConfiguration()\nconfig.directory = \"path/to/db\"\nval database = Database(\"getting-started\", config)\nval collection = database.getCollection(\"myCollection\")\n    ?: throw IllegalStateException(\"collection not found\")\n

      See Databases for more information

      "},{"location":"documents/#create-a-document","title":"Create a Document","text":"

      Now create a new document to hold your application\u2019s data.

      Use the mutable form, so that you can add data to the document.

      // Create your new document\nval mutableDoc = MutableDocument()\n

      For more on using Documents, see Document Initializers and Mutability.

      "},{"location":"documents/#create-a-dictionary","title":"Create a Dictionary","text":"

      Now create a mutable dictionary (address).

      Each element of the dictionary value will be directly accessible via its own key.

      // Create and populate mutable dictionary\n// Create a new mutable dictionary and populate some keys/values\nval address = MutableDictionary()\naddress.setString(\"street\", \"1 Main st.\")\naddress.setString(\"city\", \"San Francisco\")\naddress.setString(\"state\", \"CA\")\naddress.setString(\"country\", \"USA\")\naddress.setString(\"code\", \"90210\")\n

      Tip

      The Kotbase KTX extensions provide an idiomatic MutableDictionary creation function:

      val address = mutableDictOf(\n    \"street\" to \"1 Main st.\",\n    \"city\" to \"San Francisco\",\n    \"state\" to \"CA\",\n    \"country\" to \"USA\",\n    \"code\" to \"90210\"\n)\n

      Learn more about Using Dictionaries.

      "},{"location":"documents/#create-an-array","title":"Create an Array","text":"

      Since the hotel may have multiple contact numbers, provide a field (phones) as a mutable array.

      // Create and populate mutable array\nval phones = MutableArray()\nphones.addString(\"650-000-0000\")\nphones.addString(\"650-000-0001\")\n

      Tip

      The Kotbase KTX extensions provide an idiomatic MutableArray creation function:

      val phones = mutableArrayOf(\n    \"650-000-0000\",\n    \"650-000-0001\"\n)\n

      Learn more about Using Arrays.

      "},{"location":"documents/#populate-a-document","title":"Populate a Document","text":"

      Now add your data to the mutable document created earlier. Each data item is stored as a key-value pair.

      // Initialize and populate the document\n\n// Add document type to document properties \nmutableDoc.setString(\"type\", \"hotel\")\n\n// Add hotel name string to document properties \nmutableDoc.setString(\"name\", \"Hotel Java Mo\")\n\n// Add float to document properties \nmutableDoc.setFloat(\"room_rate\", 121.75f)\n\n// Add dictionary to document's properties \nmutableDoc.setDictionary(\"address\", address)\n\n// Add array to document's properties \nmutableDoc.setArray(\"phones\", phones)\n

      Note

      Couchbase recommends using a type attribute to define each logical document type.

      "},{"location":"documents/#save-a-document","title":"Save a Document","text":"

      Now persist the populated document to your Couchbase Lite database. This will auto-generate the document id.

      // Save the document changes \ncollection.save(mutableDoc)\n
      "},{"location":"documents/#close-the-database","title":"Close the Database","text":"

With your document saved, you can now close your Couchbase Lite database.

      // Close the database \ndatabase.close()\n
      "},{"location":"documents/#working-with-data","title":"Working with Data","text":""},{"location":"documents/#checking-a-documents-properties","title":"Checking a Document\u2019s Properties","text":"

      To check whether a given property exists in the document, use the Document.contains(key: String) method.

      If you try to access a property which doesn\u2019t exist in the document, the call will return the default value for that getter method (0 for Document.getInt(), 0.0 for Document.getFloat(), etc.).
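The defaulting behavior can be modeled in plain Kotlin. This is a conceptual sketch only, not the Kotbase implementation (the real accessors also convert between Fleece types):

```kotlin
// Conceptual sketch of scalar getters that fall back to a default
// when the key is absent (not the real Document implementation).
class PropsView(private val props: Map<String, Any?>) {
    fun contains(key: String): Boolean = props.containsKey(key)
    fun getInt(key: String): Int = (props[key] as? Number)?.toInt() ?: 0
    fun getFloat(key: String): Float = (props[key] as? Number)?.toFloat() ?: 0.0f
    fun getString(key: String): String? = props[key] as? String
}
```

Checking contains() first is the way to distinguish a stored 0 from a missing key.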

      "},{"location":"documents/#date-accessors","title":"Date accessors","text":"

      Couchbase Lite offers Date accessors as a convenience. Dates are a common data type, but JSON doesn\u2019t natively support them, so the convention is to store them as strings in ISO-8601 format.
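The round trip through the ISO-8601 string convention can be seen with java.time on the JVM (a sketch of the convention only; Kotbase itself exposes kotlinx-datetime's Instant):

```kotlin
import java.time.Instant

// Dates cross the document boundary as ISO-8601 strings;
// the date accessors perform this conversion for you.
val created = Instant.parse("2024-02-01T12:00:00Z")
val stored = created.toString()      // what is held in the document body
val restored = Instant.parse(stored) // what a date getter hands back
```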

      Example 1. Date Getter

      This example sets the date on the createdAt property and reads it back using the Document.getDate() accessor method.

      doc.setValue(\"createdAt\", Clock.System.now())\nval date = doc.getDate(\"createdAt\")\n
      "},{"location":"documents/#using-dictionaries","title":"Using Dictionaries","text":"

      API References

      • Dictionary
      • MutableDictionary

      Example 2. Read Only

      // NOTE: No error handling, for brevity (see getting started)\nval document = collection.getDocument(\"doc1\")\n\n// Getting a dictionary from the document's properties\nval dict = document?.getDictionary(\"address\")\n\n// Access a value with a key from the dictionary\nval street = dict?.getString(\"street\")\n\n// Iterate dictionary\ndict?.forEach { key ->\n    println(\"Key $key = ${dict.getValue(key)}\")\n}\n\n// Create a mutable copy\nval mutableDict = dict?.toMutable()\n

      Example 3. Mutable

      // NOTE: No error handling, for brevity (see getting started)\n\n// Create a new mutable dictionary and populate some keys/values\nval mutableDict = MutableDictionary()\nmutableDict.setString(\"street\", \"1 Main st.\")\nmutableDict.setString(\"city\", \"San Francisco\")\n\n// Add the dictionary to a document's properties and save the document\nval mutableDoc = MutableDocument(\"doc1\")\nmutableDoc.setDictionary(\"address\", mutableDict)\ncollection.save(mutableDoc)\n
      "},{"location":"documents/#using-arrays","title":"Using Arrays","text":"

      API References

      • Array
      • MutableArray

      Example 4. Read Only

      // NOTE: No error handling, for brevity (see getting started)\n\nval document = collection.getDocument(\"doc1\")\n\n// Getting a phones array from the document's properties\nval array = document?.getArray(\"phones\")\n\n// Get element count\nval count = array?.count\n\n// Access an array element by index\nval phone = array?.getString(1)\n\n// Iterate array\narray?.forEachIndexed { index, item ->\n    println(\"Row $index = $item\")\n}\n\n// Create a mutable copy\nval mutableArray = array?.toMutable()\n

      Example 5. Mutable

      // NOTE: No error handling, for brevity (see getting started)\n\n// Create a new mutable array and populate data into the array\nval mutableArray = MutableArray()\nmutableArray.addString(\"650-000-0000\")\nmutableArray.addString(\"650-000-0001\")\n\n// Set the array to document's properties and save the document\nval mutableDoc = MutableDocument(\"doc1\")\nmutableDoc.setArray(\"phones\", mutableArray)\ncollection.save(mutableDoc)\n
      "},{"location":"documents/#using-blobs","title":"Using Blobs","text":"

      For more on working with blobs, see Blobs.

      "},{"location":"documents/#document-initializers","title":"Document Initializers","text":"

      You can use the following methods/initializers:

      • Use the MutableDocument() initializer to create a new document where the document ID is randomly generated by the database.
      • Use the MutableDocument(id: String?) initializer to create a new document with a specific ID.
      • Use the Collection.getDocument() method to get a document. If the document doesn\u2019t exist in the collection, the method will return null. You can use this behavior to check if a document with a given ID already exists in the collection.

      Example 6. Persist a document

      val doc = MutableDocument()\ndoc.apply {\n    setString(\"type\", \"task\")\n    setString(\"owner\", \"todo\")\n    setDate(\"createdAt\", Clock.System.now())\n}\ncollection.save(doc)\n

      Tip

      The Kotbase KTX extensions provide a document builder DSL:

      val doc = MutableDocument {\n    \"type\" to \"task\"\n    \"owner\" to \"todo\"\n    \"createdAt\" to Clock.System.now()\n}\ndatabase.save(doc)\n
      "},{"location":"documents/#mutability","title":"Mutability","text":"

      By default, a document is immutable when it is read from the database. Use Document.toMutable() to create an updatable instance of the document.

      Example 7. Make a mutable document

      Changes to the document are persisted to the database when the save method is called.

      collection.getDocument(\"xyz\")?.toMutable()?.let {\n    it.setString(\"name\", \"apples\")\n    collection.save(it)\n}\n

      Note

      Any user change to the value of reserved keys (_id, _rev, or _deleted) will be detected when a document is saved and will result in an exception (Error Code 5 \u2014 CorruptRevisionData) \u2014 see also Document Constraints.

      "},{"location":"documents/#batch-operations","title":"Batch operations","text":"

      If you\u2019re making multiple changes to a database at once, it\u2019s faster to group them together. The following example persists a few documents in batch.

      Example 8. Batch operations

      database.inBatch {\n    for (i in 0..9) {\n        val doc = MutableDocument()\n        doc.apply {\n            setValue(\"type\", \"user\")\n            setValue(\"name\", \"user $i\")\n            setBoolean(\"admin\", false)\n        }\n        collection.save(doc)\n        println(\"saved user document: ${doc.getString(\"name\")}\")\n    }\n}\n

      At the local level this operation is still transactional: no other Database instances, including ones managed by the replicator, can make changes during the execution of the block, and other instances will not see partial changes. But Couchbase Mobile is a distributed system, and due to the way replication works, there\u2019s no guarantee that Sync Gateway or other devices will receive your changes all at once.
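The local guarantee can be modeled in plain Kotlin as staged writes that are published atomically. This is a conceptual model only, not Kotbase's implementation:

```kotlin
// Conceptual model of batch semantics (not Kotbase's implementation):
// writes are staged on a copy and published in one step when the block
// completes, so other local readers never observe a partial batch.
class Store {
    @Volatile private var data: Map<String, String> = emptyMap()

    fun read(key: String): String? = data[key]

    fun inBatch(block: MutableMap<String, String>.() -> Unit) {
        val staged = data.toMutableMap()
        staged.block() // mutate the staging copy
        data = staged  // publish all changes at once
    }
}
```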

      "},{"location":"documents/#document-change-events","title":"Document change events","text":"

      You can register for document changes. The following example registers for changes to the document with ID user.john and prints the verified_account property when a change is detected.

      Example 9. Document change events

      collection.addDocumentChangeListener(\"user.john\") { change ->\n    collection.getDocument(change.documentID)?.let {\n        println(\"Status: ${it.getString(\"verified_account\")}\")\n    }\n}\n
      "},{"location":"documents/#using-kotlin-flows","title":"Using Kotlin Flows","text":"

      Kotlin users can also take advantage of Flows to monitor for changes.

      The following methods show how to watch for document changes in a given collection or for changes to a specific document.

      Collection ChangesDocument Changes
      val collChanges: Flow<List<String>> = collection.collectionChangeFlow()\n    .map { it.documentIDs }\n
      val docChanges: Flow<DocumentChange> = collection.documentChangeFlow(\"1001\")\n    .mapNotNull { change ->\n        change.takeUnless {\n            collection.getDocument(it.documentID)?.getString(\"owner\").equals(owner)\n        }\n    }\n
      "},{"location":"documents/#document-expiration","title":"Document Expiration","text":"

      Document expiration allows users to set the expiration date for a document. When the document expires, it is purged from the database. The purge is not replicated to Sync Gateway.

      Example 10. Set document expiration

      This example sets the TTL for a document to 1 day from the current time.

      // Purge the document one day from now\ncollection.setDocumentExpiration(\n    \"doc123\",\n    Clock.System.now() + 1.days\n)\n\n// Reset expiration\ncollection.setDocumentExpiration(\"doc1\", null)\n\n// Query documents that will be expired in less than five minutes\nval query = QueryBuilder\n    .select(SelectResult.expression(Meta.id))\n    .from(DataSource.collection(collection))\n    .where(\n        Meta.expiration.lessThan(\n            Expression.longValue((Clock.System.now() + 5.minutes).toEpochMilliseconds())\n        )\n    )\n
      "},{"location":"documents/#document-constraints","title":"Document Constraints","text":"

Couchbase Lite APIs do not explicitly disallow the use of attributes with the underscore prefix at the top level of a document. This is to facilitate the creation of documents for use either in local-only mode, where documents are not synced, or when used exclusively in peer-to-peer sync.

      Note

\"_id\", \"_rev\", and \"_sequence\" are reserved keywords and must not be used as top-level attributes \u2014 see Example 11.

      Users are cautioned that any attempt to sync such documents to Sync Gateway will result in an error. To be future-proof, you are advised to avoid creating such documents. Use of these attributes for user-level data may result in undefined system behavior.

      For more guidance \u2014 see Sync Gateway - data modeling guidelines

      Example 11. Reserved Keys List

      • _attachments
      • _deleted 1
      • _id 1
      • _removed
      • _rev 1
      • _sequence
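A simple pre-save guard can surface these constraints early. The helper below is hypothetical, not part of the Kotbase API; the database itself rejects changes to _id, _rev, and _deleted only when a document is saved:

```kotlin
// Hypothetical pre-save guard for the reserved top-level keys listed above
// (not part of the Kotbase API).
val reservedKeys = setOf("_attachments", "_deleted", "_id", "_removed", "_rev", "_sequence")

fun requireNoReservedKeys(props: Map<String, Any?>) {
    val used = props.keys.filter { it in reservedKeys }
    require(used.isEmpty()) { "Reserved top-level keys used: $used" }
}
```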
      "},{"location":"documents/#working-with-json-data","title":"Working with JSON Data","text":"

      In this section Arrays | Blobs | Dictionaries | Documents | Query Results as JSON

The toJSON() typed accessor makes it easy to convert between JSON data and native Couchbase Lite objects.

      "},{"location":"documents/#arrays","title":"Arrays","text":"

      Convert an Array to and from JSON using the toJSON() and toList() methods \u2014 see Example 12.

      Additionally, you can:

      • Initialize a MutableArray using data supplied as a JSON string. This is done using the MutableArray(json: String) constructor \u2014 see Example 12.
      • Set data with a JSON string using setJSON().

      Example 12. Arrays as JSON strings

      // JSON String -- an Array (3 elements. including embedded arrays)\nval jsonString = \"\"\"[{\"id\":\"1000\",\"type\":\"hotel\",\"name\":\"Hotel Ted\",\"city\":\"Paris\",\"country\":\"France\",\"description\":\"Undefined description for Hotel Ted\"},{\"id\":\"1001\",\"type\":\"hotel\",\"name\":\"Hotel Fred\",\"city\":\"London\",\"country\":\"England\",\"description\":\"Undefined description for Hotel Fred\"},{\"id\":\"1002\",\"type\":\"hotel\",\"name\":\"Hotel Ned\",\"city\":\"Balmain\",\"country\":\"Australia\",\"description\":\"Undefined description for Hotel Ned\",\"features\":[\"Cable TV\",\"Toaster\",\"Microwave\"]}]\"\"\"\n\n// initialize array from JSON string\nval mArray = MutableArray(jsonString)\n\n// Create and save new document using the array\nfor (i in 0 ..< mArray.count) {\n    mArray.getDictionary(i)?.apply {\n        println(getString(\"name\") ?: \"unknown\")\n        collection.save(MutableDocument(getString(\"id\"), toMap()))\n    }\n}\n\n// Get an array from the document as a JSON string\ncollection.getDocument(\"1002\")?.getArray(\"features\")?.apply {\n    // Print its elements\n    for (feature in toList()) {\n        println(\"$feature\")\n    }\n    println(toJSON())\n}\n
      "},{"location":"documents/#blobs","title":"Blobs","text":"

      Convert a Blob to JSON using the toJSON() method \u2014 see Example 13.

You can use Blob.isBlob() to check whether a given dictionary object is a blob \u2014 see Example 13.

      Note that the blob object must first be saved to the database (generating the required metadata) before you can use the toJSON() method.

      Example 13. Blobs as JSON strings

      val thisBlob = collection.getDocument(\"thisdoc-id\")!!.toMap()\nif (!Blob.isBlob(thisBlob)) {\n    return\n}\nval blobType = thisBlob[\"content_type\"].toString()\nval blobLength = thisBlob[\"length\"] as Number?\n

      See also: Blobs

      "},{"location":"documents/#dictionaries","title":"Dictionaries","text":"

Convert a Dictionary to a JSON string using the toJSON() method, or to a native map using toMap() \u2014 see Example 14.

      Additionally, you can:

      • Initialize a MutableDictionary using data supplied as a JSON string. This is done using the MutableDictionary(json: String) constructor \u2014 see Example 14.
      • Set data with a JSON string using setJSON().

      Example 14. Dictionaries as JSON strings

      val jsonString = \"\"\"{\"id\":\"1002\",\"type\":\"hotel\",\"name\":\"Hotel Ned\",\"city\":\"Balmain\",\"country\":\"Australia\",\"description\":\"Undefined description for Hotel Ned\",\"features\":[\"Cable TV\",\"Toaster\",\"Microwave\"]}\"\"\"\n\nval mDict = MutableDictionary(jsonString)\nprintln(\"$mDict\")\nprintln(\"Details for: ${mDict.getString(\"name\")}\")\nmDict.forEach { key ->\n    println(key + \" => \" + mDict.getValue(key))\n}\n
      "},{"location":"documents/#documents","title":"Documents","text":"

Convert a Document to a JSON string using the toJSON() method, or to a native map using toMap() \u2014 see Example 15.

      Additionally, you can:

      • Initialize a MutableDocument using data supplied as a JSON string. This is done using the MutableDocument(id: String?, json: String) constructor \u2014 see Example 15.
      • Set data with a JSON string using setJSON().

      Example 15. Documents as JSON strings

      QueryBuilder\n    .select(SelectResult.expression(Meta.id).`as`(\"metaId\"))\n    .from(DataSource.collection(srcColl))\n    .execute()\n    .forEach {\n        it.getString(\"metaId\")?.let { thisId ->\n            srcColl.getDocument(thisId)?.toJSON()?.let { json ->\n                println(\"JSON String = $json\")\n                val hotelFromJSON = MutableDocument(thisId, json)\n                dstColl.save(hotelFromJSON)\n                dstColl.getDocument(thisId)?.toMap()?.forEach { e ->\n                    println(\"${e.key} => ${e.value}\")\n                }\n            }\n        }\n    }\n
      "},{"location":"documents/#query-results-as-json","title":"Query Results as JSON","text":"

      Convert a query Result to a JSON string using its toJSON() accessor method. The JSON string can easily be serialized or used as required in your application. See Example 16 for a working example using kotlinx-serialization.

      Example 16. Using JSON Results

      // Uses kotlinx-serialization JSON processor\n@Serializable\ndata class Hotel(val id: String, val type: String, val name: String)\n\nval hotels = mutableListOf<Hotel>()\n\nval query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"type\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n\nquery.execute().use { rs ->\n    rs.forEach {\n\n        // Get result as JSON string\n        val json = it.toJSON()\n\n        // Get JsonObject map from JSON string\n        val mapFromJsonString = Json.decodeFromString<JsonObject>(json)\n\n        // Use created JsonObject map\n        val hotelId = mapFromJsonString[\"id\"].toString()\n        val hotelType = mapFromJsonString[\"type\"].toString()\n        val hotelName = mapFromJsonString[\"name\"].toString()\n\n        // Get custom object from JSON string\n        val hotel = Json.decodeFromString<Hotel>(json)\n        hotels.add(hotel)\n    }\n}\n
      "},{"location":"documents/#json-string-format","title":"JSON String Format","text":"

      If your query selects ALL then the JSON format will be:

      {\n  database-name: {\n    key1: \"value1\",\n    keyx: \"valuex\"\n  }\n}\n

If your query selects a subset of the available properties, the JSON format will be:

      {\n  key1: \"value1\",\n  keyx: \"valuex\"\n}\n
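The two result shapes above can be sketched as Kotlin maps, with "db-name" standing in for your actual database name and the keys as placeholders:

```kotlin
// SELECT * nests the document's properties under the database name.
val selectAllShape: Map<String, Map<String, Any?>> = mapOf(
    "db-name" to mapOf("key1" to "value1", "keyx" to "valuex")
)

// Selecting specific properties yields a flat object.
val selectPropertiesShape: Map<String, Any?> = mapOf(
    "key1" to "value1",
    "keyx" to "valuex"
)

// The same property is reached at different depths depending on the query.
val fromAll = selectAllShape["db-name"]?.get("key1")
val fromProps = selectPropertiesShape["key1"]
```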
1. Any change to this reserved key will be detected when it is saved and will result in a Couchbase exception (Error Code 5 \u2014 CorruptRevisionData)

      "},{"location":"full-text-search/","title":"Full Text Search","text":"

      Couchbase Lite database data querying concepts \u2014 full text search

      "},{"location":"full-text-search/#overview","title":"Overview","text":"

      To run a full-text search (FTS) query, you must create a full-text index on the expression being matched. Unlike regular queries, the index is not optional.

      You can choose to use SQL++ or QueryBuilder syntaxes to create and use FTS indexes.

      The following examples use the data model introduced in Indexing. They create and use an FTS index built from the hotel\u2019s overview text.

      "},{"location":"full-text-search/#sql","title":"SQL++","text":""},{"location":"full-text-search/#create-index","title":"Create Index","text":"

      SQL++ provides a configuration object to define Full Text Search indexes \u2014 FullTextIndexConfiguration.

      Example 1. Using SQL++'s FullTextIndexConfiguration

      collection.createIndex(\n    \"overviewFTSIndex\",\n    FullTextIndexConfiguration(\"overview\")\n)\n
      "},{"location":"full-text-search/#use-index","title":"Use Index","text":"

      Full-text search is enabled using the SQL++ match() function.

      With the index created, you can construct and run a full-text search (FTS) query using the indexed properties.

The index omits a set of common words (stop words such as \"I\", \"the\", and \"an\") to keep them from overly influencing your queries. See the full list of these stop words.

      The following example finds all hotels mentioning Michigan in their overview text.

      Example 2. Using SQL++ Full Text Search

val ftsQuery = database.createQuery(\n    \"SELECT _id, overview FROM _ WHERE MATCH(overviewFTSIndex, 'michigan') ORDER BY RANK(overviewFTSIndex)\"\n)\nftsQuery.execute().use { rs ->\n    rs.allResults().forEach {\n        println(\"${it.getString(\"_id\")}: ${it.getString(\"overview\")}\")\n    }\n}\n
      "},{"location":"full-text-search/#querybuilder","title":"QueryBuilder","text":""},{"location":"full-text-search/#create-index_1","title":"Create Index","text":"

      The following example creates an FTS index on the overview property.

      Example 3. Using the IndexBuilder method

      collection.createIndex(\n    \"overviewFTSIndex\",\n    IndexBuilder.fullTextIndex(FullTextIndexItem.property(\"overview\"))\n)\n
      "},{"location":"full-text-search/#use-index_1","title":"Use Index","text":"

      With the index created, you can construct and run a full-text search (FTS) query using the indexed properties.

      The following example finds all hotels mentioning Michigan in their overview text.

      Example 4. Using QueryBuilder Full Text Search

val ftsQuery = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"overview\")\n    )\n    .from(DataSource.collection(collection))\n    .where(FullTextFunction.match(\"overviewFTSIndex\", \"michigan\"))\n\nftsQuery.execute().use { rs ->\n    rs.allResults().forEach {\n        println(\"${it.getString(\"id\")}: ${it.getString(\"overview\")}\")\n    }\n}\n
      "},{"location":"full-text-search/#operation","title":"Operation","text":"

In the examples above, the pattern to match is a word: the full-text search query matches all documents that contain the word \"michigan\" in the value of the doc.overview property.

      Search is supported for all languages that use whitespace to separate words.

Stemming, which matches different grammatical forms of the same word, such as \"fast\" and \"faster\", is supported in the following languages: Danish, Dutch, English, Finnish, French, German, Hungarian, Italian, Norwegian, Portuguese, Romanian, Russian, Spanish, Swedish and Turkish.

      "},{"location":"full-text-search/#pattern-matching-formats","title":"Pattern Matching Formats","text":"

      As well as providing specific words or strings to match against, you can provide the pattern to match in these formats.

      "},{"location":"full-text-search/#prefix-queries","title":"Prefix Queries","text":"

      The query expression used to search for a term prefix is the prefix itself with a \"*\" character appended to it.

      Example 5. Prefix query

      Query for all documents containing a term with the prefix \"lin\".

      \"lin*\"\n

      This will match:

      • All documents that contain \"linux\"
      • And \u2026 those that contain terms \"linear\", \"linker\", \"linguistic\", and so on.
      "},{"location":"full-text-search/#overriding-the-property-name","title":"Overriding the Property Name","text":"

      Normally, a token or token prefix query is matched against the document property specified as the left-hand side of the match operator. This may be overridden by specifying a property name followed by a \":\" character before a basic term query. There may be space between the \":\" and the term to query for, but not between the property name and the \":\" character.

      Example 6. Override indexed property name

      Query the database for documents for which the term \"linux\" appears in the document title, and the term \"problems\" appears in either the title or body of the document.

      'title:linux problems'\n
      "},{"location":"full-text-search/#phrase-queries","title":"Phrase Queries","text":"

      A phrase query is one that retrieves all documents containing a nominated set of terms or term prefixes in a specified order with no intervening tokens.

      Phrase queries are specified by enclosing a space separated sequence of terms or term prefixes in double quotes (\").

      Example 7. Phrase query

      Query for all documents that contain the phrase \"linux applications\".

      \"linux applications\"\n
      "},{"location":"full-text-search/#near-queries","title":"NEAR Queries","text":"

A NEAR query returns documents that contain two or more nominated terms or phrases within a specified proximity of each other (by default with 10 or fewer intervening terms). A NEAR query is specified by putting the keyword \"NEAR\" between two phrase, token, or token prefix queries. To specify a proximity other than the default, an operator of the form \"NEAR/<number>\" may be used, where <number> is the maximum number of intervening terms allowed.

      Example 8. Near query

Search for documents that contain the terms \"database\" and \"replication\" with not more than 2 terms separating the two.

      \"database NEAR/2 replication\"\n
      "},{"location":"full-text-search/#and-or-not-query-operators","title":"AND, OR & NOT Query Operators","text":"

      The enhanced query syntax supports the AND, OR and NOT binary set operators. Each of the two operands to an operator may be a basic FTS query, or the result of another AND, OR or NOT set operation. Operators must be entered using capital letters. Otherwise, they are interpreted as basic term queries instead of set operators.

      Example 9. Using And, Or and Not

      Return the set of documents that contain the term \"couchbase\", and the term \"database\".

      \"couchbase AND database\"\n
      "},{"location":"full-text-search/#operator-precedence","title":"Operator Precedence","text":"

When using the enhanced query syntax, parentheses may be used to specify the precedence of the various operators.

      Example 10. Operator precedence

      Query for the set of documents that contains the term \"linux\", and at least one of the phrases \"couchbase database\" and \"sqlite library\".

      '(\"couchbase database\" OR \"sqlite library\") AND \"linux\"'\n
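The pattern formats above are plain strings passed to match(). A small set of helper functions can make composing them less error-prone; this is a pure-Kotlin sketch, and the helper names are our own, not part of the Couchbase Lite API:

```kotlin
// Prefix query: "lin*"
fun prefix(term: String) = "$term*"

// Phrase query: terms in double quotes, matched in order.
fun phrase(vararg terms: String) = "\"${terms.joinToString(" ")}\""

// NEAR query: at most maxGap intervening terms (10 by default).
fun near(left: String, right: String, maxGap: Int = 10) = "$left NEAR/$maxGap $right"

// Override the indexed property name: "title:linux"
fun field(property: String, pattern: String) = "$property:$pattern"

// Binary set operators; these must be upper-case in the query syntax.
fun allOf(left: String, right: String) = "$left AND $right"
fun anyOf(left: String, right: String) = "($left OR $right)"
```

For instance, `allOf(anyOf(phrase("couchbase", "database"), phrase("sqlite", "library")), phrase("linux"))` reproduces the operator-precedence example above.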
      "},{"location":"full-text-search/#ordering-results","title":"Ordering Results","text":"

      It\u2019s very common to sort full-text results in descending order of relevance. This can be a very difficult heuristic to define, but Couchbase Lite comes with a ranking function you can use.

      In the OrderBy array, use a string of the form Rank(X), where X is the property or expression being searched, to represent the ranking of the result.
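In SQL++ the same ranking is expressed with ORDER BY RANK(...). A sketch that builds such a query string (in a real app you would pass it to Database.createQuery(); the index name matches the earlier examples):

```kotlin
// Build a ranked FTS query as a SQL++ string; highest-relevance rows first.
val indexName = "overviewFTSIndex"
val rankedQuery = """
    SELECT META().id AS id, overview
    FROM _
    WHERE MATCH($indexName, 'michigan')
    ORDER BY RANK($indexName) DESC
""".trimIndent()
```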

      "},{"location":"getting-started/","title":"Build and Run","text":"

      Build and run a starter app using Kotbase

      "},{"location":"getting-started/#introduction","title":"Introduction","text":"

      The Getting Started app is a very basic Kotlin Multiplatform app that demonstrates using Kotbase in a shared Kotlin module with native apps on each of the supported platforms.

      You can access the getting-started and getting-started-compose projects in the git repository under examples.

      Quick Steps

      1. Get the project and open it in Android Studio
      2. Build it
      3. Run any of the platform apps
4. Enter some input and press \"Run database work\". The log output, in the app's UI or console panel, will show output similar to that in Figure 1.
      5. That\u2019s it.

      Figure 1: Example app output

      01-13 11:35:03.733 I/SHARED_KOTLIN: Database created: Database{@@0x9645222: 'desktopApp-db'}\n01-13 11:35:03.742 I/SHARED_KOTLIN: Collection created: desktopApp-db@@x7fba7630dcb0._default.example-coll\n01-13 11:35:03.764 I/DESKTOP_APP: Created document :: 83b6acb4-21ba-4834-aee4-2419dcea1114\n01-13 11:35:03.767 I/SHARED_KOTLIN: Retrieved document:\n01-13 11:35:03.767 I/SHARED_KOTLIN: Document ID :: 83b6acb4-21ba-4834-aee4-2419dcea1114\n01-13 11:35:03.767 I/SHARED_KOTLIN: Learning :: Kotlin\n01-13 11:35:03.768 I/DESKTOP_APP: Updated document :: 83b6acb4-21ba-4834-aee4-2419dcea1114\n01-13 11:35:03.785 I/SHARED_KOTLIN: Number of rows :: 1\n01-13 11:35:03.789 I/SHARED_KOTLIN: Document ID :: 83b6acb4-21ba-4834-aee4-2419dcea1114\n01-13 11:35:03.790 I/SHARED_KOTLIN: Document :: {\"language\":\"Kotlin\",\"version\":2.0,\"platform\":\"JVM 21.0.1\",\"input\":\"Hello, Kotbase!\"}\n
      "},{"location":"getting-started/#getting-started-app","title":"Getting Started App","text":"

      The Getting Started app shows examples of the essential Couchbase Lite CRUD operations, including:

      • Create a database
      • Create a collection
      • Create a document
      • Retrieve a document
      • Update a document
      • Query documents
      • Create and run a replicator

Although not an exemplar of a real application, it will give you a good idea of how to get started using Kotbase and Kotlin Multiplatform.

      "},{"location":"getting-started/#shared-kotlin-native-ui","title":"Shared Kotlin + Native UI","text":"

      The getting-started version demonstrates using shared Kotlin code using Kotbase together with native app UIs.

      The Kotbase database examples are in the shared module, which is shared between each of the platform apps.

      "},{"location":"getting-started/#android-app","title":"Android App","text":"

      The Android app is in the androidApp module. It uses XML views for its UI.

Run: Android Studio / Command Line

      Run the androidApp run configuration.

      Install

      ./gradlew :androidApp:installDebug\n
      Run
      adb shell am start -n dev.kotbase.gettingstarted/.MainActivity\n

      "},{"location":"getting-started/#ios-app","title":"iOS App","text":"

      The iOS app is in the iosApp directory. It is an Xcode project and uses SwiftUI for its UI.

Run: Android Studio / Xcode

With the Kotlin Multiplatform Mobile plugin, run the iosApp run configuration.

      Open iosApp/iosApp.xcodeproj and run the iosApp scheme.

      "},{"location":"getting-started/#jvm-desktop-app","title":"JVM Desktop App","text":"

      The JVM desktop app is in the desktopApp module. It uses Compose UI for its UI.

Run: Android Studio / Command Line

      Run the desktopApp run configuration.

      ./gradlew :desktopApp:run\n
      "},{"location":"getting-started/#native-cli-app","title":"Native CLI App","text":"

      The native app is in the cliApp module. It uses a command-line interface (CLI) on macOS, Linux, and Windows.

The app takes two command-line arguments: first, the \"input\" value, written to the document on update, and second, true or false for whether to run the replicator. These arguments can also be passed as Gradle properties.

Run: Android Studio / Command Line

      Run the cliApp run configuration.

      ./gradlew :cliApp:runDebugExecutableNative -PinputValue=\"\" -Preplicate=false\n
      or Build
      ./gradlew :cliApp:linkDebugExecutableNative\n
      Run
      cliApp/build/bin/native/debugExecutable/cliApp.kexe \"<input value>\" <true|false>\n

      "},{"location":"getting-started/#share-everything-in-kotlin","title":"Share Everything in Kotlin","text":"

      The getting-started-compose version demonstrates sharing the entirety of the application code in Kotlin, including the UI with Compose Multiplatform.

      The entire compose app is a single Kotlin multiplatform module, encompassing all platforms, with an additional Xcode project for the iOS app.

"},{"location":"getting-started/#android-app_1","title":"Android App","text":"Run: Android Studio / Command Line

      Run the androidApp run configuration.

      Install

      ./gradlew :composeApp:installDebug\n
      Start
      adb shell am start -n dev.kotbase.gettingstarted.compose/.MainActivity\n

"},{"location":"getting-started/#ios-app_1","title":"iOS App","text":"Run: Android Studio / Xcode

With the Kotlin Multiplatform Mobile plugin, run the iosApp run configuration.

      Open iosApp/iosApp.xcworkspace and run the iosApp scheme.

      Important

      Be sure to open iosApp.xcworkspace and not iosApp.xcodeproj. The getting-started-compose iosApp uses CocoaPods and the CocoaPods Gradle plugin to add the shared library dependency. The .xcworkspace includes the CocoaPods dependencies.

      Note

      Compose Multiplatform no longer requires CocoaPods for copying resources since version 1.5.0. However, the getting-started-compose example still uses CocoaPods for linking the Couchbase Lite framework. See the getting-started version for an example of how to link the Couchbase Lite framework without using CocoaPods.

"},{"location":"getting-started/#jvm-desktop-app_1","title":"JVM Desktop App","text":"Run: Android Studio / Command Line

      Run the desktopApp run configuration.

      ./gradlew :composeApp:run\n
      "},{"location":"getting-started/#sync-gateway-replication","title":"Sync Gateway Replication","text":"

Using the apps with Sync Gateway and Couchbase Server requires working installations of both. See also \u2014 Install Sync Gateway

      Once you have Sync Gateway configured, update the ReplicatorConfiguration in the app with the server's URL endpoint and authentication credentials.

      "},{"location":"getting-started/#kotlin-multiplatform-tips","title":"Kotlin Multiplatform Tips","text":""},{"location":"getting-started/#calling-platform-specific-apis","title":"Calling Platform-specific APIs","text":"

      The apps utilize the Kotlin Multiplatform expect/actual feature to populate the created document with the platform the app is running on.

      See common expect fun getPlatform() and actual fun getPlatform() for Android, iOS, JVM, Linux, macOS, and Windows.
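As a sketch of that pattern: the expected declaration lives in commonMain, and each platform source set supplies an actual implementation. The JVM version shown here is illustrative; the real apps return richer platform details.

```kotlin
// In commonMain, shared code declares the expected API
// (shown as a comment so this sketch compiles as plain JVM Kotlin):
//
//     expect fun getPlatform(): String
//
// jvmMain then provides the matching actual implementation:
fun getPlatform(): String = "JVM ${System.getProperty("java.version")}"

// Shared code can now call getPlatform() without knowing the platform,
// e.g. when populating a document's "platform" field.
```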

      "},{"location":"getting-started/#using-coroutines-in-swift","title":"Using Coroutines in Swift","text":"

      The getting-started app uses KMP-NativeCoroutines to consume Kotlin Flows in Swift. See @NativeCoroutines annotation in Kotlin and asyncSequence(for:) in Swift code.

      "},{"location":"getting-started/#kotbase-library-source","title":"Kotbase Library Source","text":"

      The apps can get the Kotbase library dependency either from its published Maven artifact or build the library locally from the source repository. Set the useLocalLib property in gradle.properties to true to build the library from source, otherwise the published artifact from Maven Central will be used.

      "},{"location":"handling-data-conflicts/","title":"Handling Data Conflicts","text":"

      Couchbase Lite Database Sync \u2014 handling conflict between data changes

      "},{"location":"handling-data-conflicts/#causes-of-conflicts","title":"Causes of Conflicts","text":"

      Document conflicts can occur if multiple changes are made to the same version of a document by multiple peers in a distributed system. For Couchbase Mobile, this can be a Couchbase Lite or Sync Gateway database instance.

      Such conflicts can occur after either of the following events:

• A replication saves a document change \u2014 in which case the change with the most revisions wins (unless one change is a delete). See Conflicts when Replicating
      • An application saves a document change directly to a database instance \u2014 in which case, last write wins, unless one change is a delete \u2014 see Conflicts when Updating

      Note

      Deletes always win. So, in either of the above cases, if one of the changes was a delete then that change wins.

      The following sections discuss each scenario in more detail.

      Dive deeper \u2026

      Read more about Document Conflicts and Automatic Conflict Resolution in Couchbase Mobile.

      "},{"location":"handling-data-conflicts/#conflicts-when-replicating","title":"Conflicts when Replicating","text":"

There\u2019s no practical way to prevent a conflict when incompatible changes to a document are made in multiple instances of an app. The conflict is realized only when replication propagates the incompatible changes between the instances.

      Example 1. A typical replication conflict scenario

      1. Molly uses her device to create DocumentA.
      2. Replication syncs DocumentA to Naomi\u2019s device.
      3. Molly uses her device to apply ChangeX to DocumentA.
      4. Naomi uses her device to make a different change, ChangeY, to DocumentA.
      5. Replication syncs ChangeY to Molly\u2019s device. This device already has ChangeX putting the local document in conflict.
      6. Replication syncs ChangeX to Naomi\u2019s device. This device already has ChangeY and now Naomi\u2019s local document is in conflict.
      "},{"location":"handling-data-conflicts/#automatic-conflict-resolution","title":"Automatic Conflict Resolution","text":"

      Note

These rules apply only to conflicts caused by replication. Conflict resolution takes place exclusively during pull replication; push replication is unaffected.

      Couchbase Lite uses the following rules to handle conflicts such as those described in A typical replication conflict scenario:

      • If one of the changes is a deletion: A deleted document (that is, a tombstone) always wins over a document update.
      • If both changes are document changes: The change with the most revisions will win. Since each change creates a revision with an ID prefixed by an incremented version number, the winner is the change with the highest version number.

      The result is saved internally by the Couchbase Lite replicator. Those rules describe the internal behavior of the replicator. For additional control over the handling of conflicts, including when a replication is in progress, see Custom Conflict Resolution.
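The \"most revisions wins\" rule can be sketched in a few lines. This is a simplification of the replicator's internal behavior: revision IDs carry a generation prefix, and the higher generation wins (tie-breaking and deletions are ignored here).

```kotlin
// Extract the generation number from a revision ID such as "3-deadbeef".
fun generation(revId: String): Int = revId.substringBefore('-').toInt()

// Pick the change with the most revisions behind it (higher generation).
fun autoResolve(localRev: String, remoteRev: String): String =
    if (generation(localRev) >= generation(remoteRev)) localRev else remoteRev
```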

      "},{"location":"handling-data-conflicts/#custom-conflict-resolution","title":"Custom Conflict Resolution","text":"

      Starting in Couchbase Lite 2.6, application developers who want more control over how document conflicts are handled can use custom logic to select the winner between conflicting revisions of a document.

      If a custom conflict resolver is not provided, the system will automatically resolve conflicts as discussed in Automatic Conflict Resolution, and as a consequence there will be no conflicting revisions in the database.

      Caution

As with any user-defined function, be cautious about writing sub-optimal custom conflict handlers: a time-consuming handler can slow down the client\u2019s save operations.

To implement custom conflict resolution during replication, complete the following steps:

      1. Conflict Resolver
      2. Configure the Replicator
      "},{"location":"handling-data-conflicts/#conflict-resolver","title":"Conflict Resolver","text":"

      Apps have the following strategies for resolving conflicts:

      • Local Wins: The current revision in the database wins.
      • Remote Wins: The revision pulled from the remote endpoint through replication wins.
      • Merge: Merge the content bodies of the conflicting revisions.

      Example 2. Using conflict resolvers

Local Wins / Remote Wins / Merge
      val localWinsResolver: ConflictResolver = { conflict ->\n    conflict.localDocument\n}\nconfig.conflictResolver = localWinsResolver\n
      val remoteWinsResolver: ConflictResolver = { conflict ->\n    conflict.remoteDocument\n}\nconfig.conflictResolver = remoteWinsResolver\n
      val mergeConflictResolver: ConflictResolver = { conflict ->\n    val localDoc = conflict.localDocument?.toMap()?.toMutableMap()\n    val remoteDoc = conflict.remoteDocument?.toMap()?.toMutableMap()\n\n    val merge: MutableMap<String, Any?>?\n    if (localDoc == null) {\n        merge = remoteDoc\n    } else {\n        merge = localDoc\n        if (remoteDoc != null) {\n            merge.putAll(remoteDoc)\n        }\n    }\n\n    if (merge == null) {\n        MutableDocument(conflict.documentId)\n    } else {\n        MutableDocument(conflict.documentId, merge)\n    }\n}\nconfig.conflictResolver = mergeConflictResolver\n

      When a null document is returned by the resolver, the conflict will be resolved as a document deletion.

      "},{"location":"handling-data-conflicts/#important-guidelines-and-best-practices","title":"Important Guidelines and Best Practices","text":"

      Points of Note:

• If you have multiple replicators, use a single, unified conflict resolver across all of them rather than distinct resolvers. Failure to do so could lead to data loss under exception cases, or if the app is terminated (by the user or a crash) while there are pending conflicts.
      • If the document ID of the document returned by the resolver does not correspond to the document that is in conflict then the replicator will log a warning message.

      Important

      Developers are encouraged to review the warnings and fix the resolver to return a valid document ID.

      • If a document from a different database is returned, the replicator will treat it as an error. A document replication event will be posted with an error and an error message will be logged.

      Important

      Apps are encouraged to observe such errors and take appropriate measures to fix the resolver function.

      • When the replicator is stopped, the system will attempt to resolve outstanding and pending conflicts before stopping. Hence, apps should expect to see some delay when attempting to stop the replicator depending on the number of outstanding documents in the replication queue and the complexity of the resolver function.
      • If there is an exception thrown in the ConflictResolver function, the exception will be caught and handled:
        • The conflict to resolve will be skipped. The pending conflicted documents will be resolved when the replicator is restarted.
        • The exception will be reported in the warning logs.
        • The exception will be reported in the document replication event.

      Important

While the system will handle exceptions in the manner specified above, app developers are strongly encouraged to catch exceptions in the resolver function and handle them in a way appropriate to their needs.

      "},{"location":"handling-data-conflicts/#configure-the-replicator","title":"Configure the Replicator","text":"

      The implemented custom conflict resolver can be registered on the ReplicatorConfiguration object. The default value of the conflictResolver is null. When the value is null, the default conflict resolution will be applied.

      Example 3. A Conflict Resolver

      val collectionConfig = CollectionConfigurationFactory.newConfig(conflictResolver = localWinsResolver)\nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(srcCollections to collectionConfig)\n    )\n)\n\n// Start the replicator\n// (be sure to hold a reference somewhere that will prevent it from being GCed)\nrepl.start()\nthis.replicator = repl\n
      "},{"location":"handling-data-conflicts/#conflicts-when-updating","title":"Conflicts when Updating","text":"

      When updating a document, you need to consider the possibility of update conflicts. Update conflicts can occur when you try to update a document that\u2019s been updated since you read it.

      Example 4. How Updating May Cause Conflicts

      Here\u2019s a typical sequence of events that would create an update conflict:

      1. Your code reads the document\u2019s current properties, and constructs a modified copy to save.
      2. Another thread (perhaps the replicator) updates the document, creating a new revision with different properties.
      3. Your code updates the document with its modified properties, for example using Collection.save(MutableDocument).
      "},{"location":"handling-data-conflicts/#automatic-conflict-resolution_1","title":"Automatic Conflict Resolution","text":"

In Couchbase Lite, by default, the conflict is automatically resolved and only one document update is stored in the database. The Last-Write-Wins (LWW) algorithm is used to pick the winning update. So, in effect, the changes from step 2 would be overwritten and lost.

      If the probability of update conflicts is high in your app, and you wish to avoid the possibility of overwritten data, the save() and delete() APIs provide additional method signatures with concurrency control:

      Save operations

      Collection.save(MutableDocument, ConcurrencyControl) \u2014 attempts to save the document with a concurrency control.

      The ConcurrencyControl parameter has two possible values:

      • LAST_WRITE_WINS (default): The last operation wins if there is a conflict.
      • FAIL_ON_CONFLICT: The operation will fail if there is a conflict. In this case, the app can detect the error that is being thrown, and handle it by re-reading the document, making the necessary conflict resolution, then trying again.

      Delete operations

      As with save operations, delete operations also have two method signatures, which specify how to handle a possible conflict:

      • Collection.delete(Document): The last write will win if there is a conflict.
• Collection.delete(Document, ConcurrencyControl): Attempts to delete the document with a concurrency control, with the same options described above.
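The FAIL_ON_CONFLICT handling described above typically takes the form of a re-read-and-retry loop. This sketch abstracts the database behind lambdas so it stays self-contained: read re-reads the current properties, and trySave reports a conflict by returning false, mirroring the boolean result of Collection.save(MutableDocument, ConcurrencyControl.FAIL_ON_CONFLICT).

```kotlin
// Retry loop for ConcurrencyControl.FAIL_ON_CONFLICT: on conflict, re-read
// the document, re-apply the change, and try again.
fun saveWithConflictRetry(
    maxAttempts: Int,
    read: () -> MutableMap<String, Any?>,          // re-read latest properties
    modify: (MutableMap<String, Any?>) -> Unit,    // re-apply the app's change
    trySave: (Map<String, Any?>) -> Boolean        // false signals a conflict
): Boolean {
    repeat(maxAttempts) {
        val props = read()
        modify(props)
        if (trySave(props)) return true            // saved without conflict
    }
    return false                                   // gave up after maxAttempts
}
```

In a real app, trySave would wrap the collection's save call with FAIL_ON_CONFLICT, and read would call Collection.getDocument().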
      "},{"location":"handling-data-conflicts/#custom-conflict-handlers","title":"Custom Conflict Handlers","text":"

Developers can supply a conflict handler when saving a document, handling the conflict within a single save method call.

      To implement custom conflict resolution when saving a document, apps must call the save method with a conflict handler block (Collection.save(MutableDocument, ConflictHandler)).

      The following code snippet shows an example of merging properties from the existing document (curDoc) into the one being saved (newDoc). In the event of conflicting keys, it will pick the key value from newDoc.

      Example 5. Merging document properties

      val mutableDocument = collection.getDocument(\"xyz\")?.toMutable() ?: return\nmutableDocument.setString(\"name\", \"apples\")\ncollection.save(mutableDocument) { newDoc, curDoc ->\n    if (curDoc == null) {\n        return@save false\n    }\n    val dataMap: MutableMap<String, Any?> = curDoc.toMap().toMutableMap()\n    dataMap.putAll(newDoc.toMap())\n    newDoc.setData(dataMap)\n    true\n}\n
      "},{"location":"indexing/","title":"Indexing","text":"

      Couchbase Lite database data model concepts - indexes

      "},{"location":"indexing/#introduction","title":"Introduction","text":"

      Querying documents using a pre-existing database index is much faster because an index narrows down the set of documents to examine \u2014 see the Query Troubleshooting topic.

      When planning the indexes you need for your database, remember that while indexes make queries faster, they may also:

      • Make writes slightly slower, because each index must be updated whenever a document is updated
      • Make your Couchbase Lite database slightly larger

      Too many indexes may hurt performance. Optimal performance depends on designing and creating the right indexes to go along with your queries.

      Constraints

      Couchbase Lite does not currently support partial value indexes, that is, indexes with non-property expressions. Index only the properties that you plan to use in your queries.

      "},{"location":"indexing/#creating-a-new-index","title":"Creating a new index","text":"

      You can use either SQL++ or QueryBuilder syntax to create an index.

      Example 2 creates a new index for the type and name properties, shown in this data model:

      Example 1. Data Model

      {\n    \"_id\": \"hotel123\",\n    \"type\": \"hotel\",\n    \"name\": \"The Michigander\",\n    \"overview\": \"Ideally situated for exploration of the Motor City and the wider state of Michigan. Tripadvisor rated the hotel ...\",\n    \"state\": \"Michigan\"\n}\n
      "},{"location":"indexing/#sql","title":"SQL++","text":"

      The code to create the index will look something like this:

      Example 2. Create index

      collection.createIndex(\n    \"TypeNameIndex\",\n    ValueIndexConfiguration(\"type\", \"name\")\n)\n
      "},{"location":"indexing/#querybuilder","title":"QueryBuilder","text":"

      Tip

      See the QueryBuilder topic to learn more about QueryBuilder.

      The code to create the index will look something like this:

      Example 3. Create index with QueryBuilder

      collection.createIndex(\n    \"TypeNameIndex\",\n    IndexBuilder.valueIndex(\n        ValueIndexItem.property(\"type\"),\n        ValueIndexItem.property(\"name\")\n    )\n)\n
      "},{"location":"installation/","title":"Installation","text":"

      Add the Kotbase dependency to your Kotlin Multiplatform project in the commonMain source set dependencies of your shared module's build.gradle.kts:

      Enterprise EditionCommunity Edition build.gradle.kts
      kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite-ee:3.1.3-1.1.0\")\n        }\n    }\n}\n
      build.gradle.kts
      kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite:3.1.3-1.1.0\")\n        }\n    }\n}\n

      Note

      The Couchbase Lite Community Edition is free and open source. The Enterprise Edition is free for development and testing, but requires a license from Couchbase for production use. See Community vs Enterprise Edition.

      Kotbase is published to Maven Central. The Couchbase Lite Enterprise Edition dependency additionally requires the Couchbase Maven repository.

      Enterprise EditionCommunity Edition build.gradle.kts
      repositories {\n    mavenCentral()\n    maven(\"https://mobile.maven.couchbase.com/maven2/dev/\")\n}\n
      build.gradle.kts
      repositories {\n    mavenCentral()\n}\n
      "},{"location":"installation/#native-platforms","title":"Native Platforms","text":"

      Native platform targets should additionally link to the Couchbase Lite dependency native binary. See Supported Platforms for more details.

      "},{"location":"installation/#linux","title":"Linux","text":"

      Both the JVM running on Linux and native Linux targets require a specific version of the libicu dependency. (You will see an error such as libLiteCore.so: libicuuc.so.71: cannot open shared object file: No such file or directory, indicating the expected version.) If the required version isn't available from your distribution's package manager, you can download it from GitHub.
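      As a rough sketch (assuming a glibc-based distribution with ldconfig on the PATH), you can parse the required major version out of the error message and compare it against what the system provides:

```shell
# The load error names the exact soname Couchbase Lite expects; parse the
# required major version out of it (the error text here is from the docs):
err="libLiteCore.so: libicuuc.so.71: cannot open shared object file: No such file or directory"
required=$(printf '%s\n' "$err" | grep -o 'libicuuc\.so\.[0-9]*' | grep -o '[0-9]*$')
echo "required libicu major version: $required"

# List the libicu major versions this system actually provides:
ldconfig -p 2>/dev/null | grep -o 'libicuuc\.so\.[0-9]*' | sort -u || true
```

      If the required version is missing from the second list, install it from your package manager or GitHub as described above.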

      "},{"location":"integrate-custom-listener/","title":"Integrate Custom Listener","text":"

      Couchbase Lite database peer-to-peer sync \u2014 integrate a custom-built listener

      "},{"location":"integrate-custom-listener/#overview","title":"Overview","text":"

      This is an Enterprise Edition feature.

      This content covers how to integrate a custom MessageEndpointListener solution with Couchbase Lite to handle the data transfer, which is the sending and receiving of data. Where applicable, we discuss how to integrate Couchbase Lite into the workflow.

      The following sections describe a typical Peer-to-Peer workflow.

      "},{"location":"integrate-custom-listener/#peer-discovery","title":"Peer Discovery","text":"

      Peer discovery is the first step. The communication framework will generally include a peer discovery API for devices to advertise themselves on the network and to browse for other peers.

      "},{"location":"integrate-custom-listener/#active-peer","title":"Active Peer","text":"

      The first step is to initialize the Couchbase Lite database.

      "},{"location":"integrate-custom-listener/#passive-peer","title":"Passive Peer","text":"

      In addition to initializing the database, the Passive Peer must initialize the MessageEndpointListener. The MessageEndpointListener acts as a listener for incoming connections.

      val listener = MessageEndpointListener(\n    MessageEndpointListenerConfigurationFactory.newConfig(collections, ProtocolType.MESSAGE_STREAM)\n)\n
      "},{"location":"integrate-custom-listener/#peer-selection-and-connection-setup","title":"Peer Selection and Connection Setup","text":"

      Once a peer device is found, the application code must decide whether it should establish a connection with that peer. This step includes inviting a peer to a session and peer authentication.

      This is handled by the Communication Framework.

      Once the remote peer has been authenticated, the next step is to connect with that peer and initialize the MessageEndpoint API.

      "},{"location":"integrate-custom-listener/#replication-setup","title":"Replication Setup","text":""},{"location":"integrate-custom-listener/#active-peer_1","title":"Active Peer","text":"

      When the connection is established, the Active Peer must instantiate a MessageEndpoint object corresponding to the remote peer.

      // The delegate must implement the `MessageEndpointDelegate` protocol.\nval messageEndpoint = MessageEndpoint(\"UID:123\", \"active\", ProtocolType.MESSAGE_STREAM, delegate)\n

      The MessageEndpoint constructor takes the following arguments:

      1. uid: A unique ID that represents the remote Active Peer.
      2. target: This represents the remote Passive Peer and could be any suitable representation of the remote peer. It could be an ID, URL, etc. If using the Multipeer Connectivity Framework, this could be the MCPeerID.
      3. protocolType: Specifies the kind of transport you intend to implement. There are two options:
        • The default (MESSAGE_STREAM) means that you want to \"send a series of messages\", or in other words the Communication Framework will control the formatting of messages so that there are clear boundaries between messages.
        • The alternative (BYTE_STREAM) means that you just want to send raw bytes over the stream, and Couchbase Lite will format the data for you to ensure that messages get delivered in full. Typically, the Communication Framework will handle message assembly and disassembly, so you would use the MESSAGE_STREAM option in most cases.
      4. delegate: The delegate that will implement the MessageEndpointDelegate protocol, which is a factory for MessageEndpointConnection.
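      The distinction between the two protocol types comes down to who preserves message boundaries. As an illustration only (not a Kotbase API), a minimal length-prefix framing scheme of the kind a raw byte stream needs can be sketched in plain Kotlin:

```kotlin
import java.io.ByteArrayOutputStream
import java.nio.ByteBuffer

// Frame each message with a 4-byte big-endian length prefix so a receiver
// reading a raw byte stream can recover the original message boundaries.
fun frame(messages: List<ByteArray>): ByteArray {
    val out = ByteArrayOutputStream()
    for (msg in messages) {
        out.write(ByteBuffer.allocate(4).putInt(msg.size).array())
        out.write(msg)
    }
    return out.toByteArray()
}

// Split the concatenated stream back into whole messages.
fun deframe(stream: ByteArray): List<ByteArray> {
    val messages = mutableListOf<ByteArray>()
    val buf = ByteBuffer.wrap(stream)
    while (buf.remaining() >= 4) {
        val len = buf.int
        val msg = ByteArray(len)
        buf.get(msg)
        messages.add(msg)
    }
    return messages
}
```

      With MESSAGE_STREAM, the Communication Framework does this kind of framing for you; with BYTE_STREAM, the boundary handling happens below the framework.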

      Then, a Replicator is instantiated with the initialized MessageEndpoint as the target.

      // Create the replicator object.\nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        collections = mapOf(collections to null),\n        target = messageEndpoint\n    )\n)\n\n// Start the replication.\nrepl.start()\nthis.replicator = repl\n

      Next, Couchbase Lite will call back the application code through the MessageEndpointDelegate lambda. When the application receives the callback, it must create an instance of MessageEndpointConnection and return it.

      /* implementation of MessageEndpointDelegate */\nval delegate: MessageEndpointDelegate = { endpoint ->\n    ActivePeerConnection()\n}\n

      Next, Couchbase Lite will call back the application code through the MessageEndpointConnection.open() method.

      /* implementation of MessageEndpointConnection */\noverride fun open(connection: ReplicatorConnection, completion: MessagingCompletion) {\n    replicatorConnection = connection\n    completion(true, null)\n}\n

      The connection argument is then set on an instance variable. The application code must keep track of every ReplicatorConnection associated with every MessageEndpointConnection.

      The MessageError argument in the completion block specifies whether the error is recoverable or not. If it is a recoverable error, the replicator will begin a retry process, creating a new MessageEndpointConnection instance.

      "},{"location":"integrate-custom-listener/#passive-peer_1","title":"Passive Peer","text":"

      After connection establishment on the Passive Peer, the first step is to initialize a new MessageEndpointConnection and pass it to the listener. This tells the listener to accept incoming data from that peer.

      /* implements MessageEndpointConnection */\nval connection = PassivePeerConnection()\nlistener?.accept(connection)\n

      listener is the instance of the MessageEndpointListener that was created in the first step (Peer Discovery).

      Couchbase Lite will call the application code back through the MessageEndpointConnection.open() method.

      /* implementation of MessageEndpointConnection */\noverride fun open(connection: ReplicatorConnection, completion: MessagingCompletion) {\n    replicatorConnection = connection\n    completion(true, null)\n}\n

      The connection argument is then set on an instance variable. The application code must keep track of every ReplicatorConnection associated with every MessageEndpointConnection.

      At this point, the connection is established, and both peers are ready to exchange data.

      "},{"location":"integrate-custom-listener/#pushpull-replication","title":"Push/Pull Replication","text":"

      Typically, an application needs to send data and receive data. The directionality of the replication could be any of the following:

      • Push only: The data is pushed from the local database to the remote database.
      • Pull only: The data is pulled from the remote database to the local database.
      • Push and Pull: The data is exchanged both ways.

      Usually, the remote is a Sync Gateway database identified through a URL. In Peer-to-Peer syncing, the remote is another Couchbase Lite database.

      The replication lifecycle is handled through the MessageEndpointConnection.

      "},{"location":"integrate-custom-listener/#active-peer_2","title":"Active Peer","text":"

      When Couchbase Lite calls back the application code through the MessageEndpointConnection.send() method, you should send that data to the other peer using the Communication Framework.

      /* implementation of MessageEndpointConnection */\noverride fun send(message: Message, completion: MessagingCompletion) {\n    /* send the data to the other peer */\n    /* ... */\n    /* call the completion handler once the message is sent */\n    completion(true, null)\n}\n

      Once the data is sent, call the completion block to acknowledge the completion. You can use the MessageError in the completion block to specify whether the error is recoverable. If it is a recoverable error, the replicator will begin a retry process, creating a new MessageEndpointConnection.

      When data is received from the Passive Peer via the Communication Framework, you call the ReplicatorConnection.receive() method.

      replicatorConnection?.receive(message)\n

      The ReplicatorConnection\u2019s receive() method passes the data to the replicator, which then processes it and persists it to the local database.

      "},{"location":"integrate-custom-listener/#passive-peer_2","title":"Passive Peer","text":"

      As in the case of the Active Peer, the Passive Peer must implement the MessageEndpointConnection.send() method to send data to the other peer.

      /* implementation of MessageEndpointConnection */\noverride fun send(message: Message, completion: MessagingCompletion) {\n    /* send the data to the other peer */\n    /* ... */\n    /* call the completion handler once the message is sent */\n    completion(true, null)\n}\n

      Once the data is sent, call the completion block to acknowledge the completion. You can use the MessageError in the completion block to specify whether the error is recoverable. If it is a recoverable error, the replicator will begin a retry process, creating a new MessageEndpointConnection.

      When data is received from the Active Peer via the Communication Framework, you call the ReplicatorConnection.receive() method.

      replicatorConnection?.receive(message)\n
      "},{"location":"integrate-custom-listener/#connection-teardown","title":"Connection Teardown","text":"

      When a peer disconnects from a peer-to-peer network, all connected peers are notified. The disconnect notification is a good opportunity to close and remove a replication connection. The steps to tear down the connection are slightly different depending on whether the active or passive peer disconnects first. We will cover each case below.

      "},{"location":"integrate-custom-listener/#initiated-by-active-peer","title":"Initiated by Active Peer","text":""},{"location":"integrate-custom-listener/#active-peer_3","title":"Active Peer","text":"

      When an Active Peer disconnects, it must call the ReplicatorConnection.close() method.

      fun disconnect() {\n    replicatorConnection?.close(null)\n    replicatorConnection = null\n}\n

      Then, Couchbase Lite will call back your code through the MessageEndpointConnection.close() to allow the application to disconnect with the Communication Framework.

      override fun close(error: Exception?, completion: MessagingCloseCompletion) {\n    /* disconnect with communications framework */\n    /* ... */\n    /* call completion handler */\n    completion()\n}\n
      "},{"location":"integrate-custom-listener/#passive-peer_3","title":"Passive Peer","text":"

      When the Passive Peer receives the corresponding disconnect notification from the Communication Framework, it must call the ReplicatorConnection.close() method.

      replicatorConnection?.close(null)\n

      Then, Couchbase Lite will call back your code through the MessageEndpointConnection.close() to allow the application to disconnect with the Communication Framework.

      /* implementation of MessageEndpointConnection */\noverride fun close(error: Exception?, completion: MessagingCloseCompletion) {\n    /* disconnect with communications framework */\n    /* ... */\n    /* call completion handler */\n    completion()\n}\n
      "},{"location":"integrate-custom-listener/#initiated-by-passive-peer","title":"Initiated by Passive Peer","text":""},{"location":"integrate-custom-listener/#passive-peer_4","title":"Passive Peer","text":"

      When the Passive Peer disconnects, it must call the MessageEndpointListener.closeAll() method.

      listener?.closeAll()\n

      Then, Couchbase Lite will call back your code through the MessageEndpointConnection.close() to allow the application to disconnect with the Communication Framework.

      /* implementation of MessageEndpointConnection */\noverride fun close(error: Exception?, completion: MessagingCloseCompletion) {\n    /* disconnect with communications framework */\n    /* ... */\n    /* call completion handler */\n    completion()\n}\n
      "},{"location":"integrate-custom-listener/#active-peer_4","title":"Active Peer","text":"

      When the Active Peer receives the corresponding disconnect notification from the Communication Framework, it must call the ReplicatorConnection.close() method.

      fun disconnect() {\n    replicatorConnection?.close(null)\n    replicatorConnection = null\n}\n

      Then, Couchbase Lite will call back your code through the MessageEndpointConnection.close() to allow the application to disconnect with the Communication Framework.

      override fun close(error: Exception?, completion: MessagingCloseCompletion) {\n    /* disconnect with communications framework */\n    /* ... */\n    /* call completion handler */\n    completion()\n}\n
      "},{"location":"intra-device-sync/","title":"Intra-device Sync","text":"

      Couchbase Lite Database Sync - Synchronize changes between databases on the same device

      "},{"location":"intra-device-sync/#overview","title":"Overview","text":"

      This is an Enterprise Edition feature.

      Couchbase Lite supports replication between two local databases at the database, scope, or collection level. This allows a Couchbase Lite replicator to store data on secondary storage. It is useful, for example, when a user\u2019s device is damaged and its data must be moved to a different device.

      Example 1. Replication between Local Databases

      val repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = DatabaseEndpoint(targetDb),\n        collections = mapOf(srcCollections to null),\n        type = ReplicatorType.PUSH\n    )\n)\n\n// Start the replicator\nrepl.start()\n// (be sure to hold a reference somewhere that will prevent it from being GCed)\nthis.replicator = repl\n
      "},{"location":"kermit/","title":"Kermit","text":"

      Kotbase Kermit is a Couchbase Lite custom logger which logs to Kermit. Kermit can direct its logs to any number of log outputs, including the console.

      "},{"location":"kermit/#installation","title":"Installation","text":"Enterprise EditionCommunity Edition build.gradle.kts
      kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite-ee-kermit:3.1.3-1.1.0\")\n        }\n    }\n}\n
      build.gradle.kts
      kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite-kermit:3.1.3-1.1.0\")\n        }\n    }\n}\n
      "},{"location":"kermit/#usage","title":"Usage","text":"
      // Disable default console logs and log to Kermit\nDatabase.log.console.level = LogLevel.NONE\nDatabase.log.custom = KermitCouchbaseLiteLogger(kermit)\n
      "},{"location":"kotlin-extensions/","title":"Kotlin Extensions","text":"

      Couchbase Lite \u2014 Kotlin support

      "},{"location":"kotlin-extensions/#introduction","title":"Introduction","text":"

      In addition to implementing the full Couchbase Lite Java SDK API, Kotbase also provides the additional APIs available in the Couchbase Lite Android KTX SDK, which includes a number of Kotlin-specific extensions.

      This includes:

      • Configuration factories for the configuration of important Couchbase Lite objects such as Databases, Replicators, and Listeners.
      • Change Flows that monitor key Couchbase Lite objects for change using Kotlin features such as coroutines and Flows.

      Additionally, Kotbase adds support for Fragment subscript APIs, similar to Couchbase Lite Swift, Objective-C, and .NET. These are not available in the Java SDK, as Java doesn't support operator overloading.

      "},{"location":"kotlin-extensions/#configuration-factories","title":"Configuration Factories","text":"

      Couchbase Lite provides a set of configuration factories. These allow use of named parameters to specify property settings.

      This makes it simple to create variant configurations, by simply overriding named parameters:

      Example of overriding configuration

      val listener8080 = URLEndpointListenerConfigurationFactory.newConfig(\n    networkInterface = \"en0\",\n    port = 8080\n)\nval listener8081 = listener8080.newConfig(port = 8081)\n
      "},{"location":"kotlin-extensions/#database","title":"Database","text":"

      Use DatabaseConfigurationFactory to create a DatabaseConfiguration object, overriding the receiver\u2019s values with the passed parameters.

      In UseDefinition
      val database = Database(\n    \"getting-started\",\n    DatabaseConfigurationFactory.newConfig()\n)\n
      val DatabaseConfigurationFactory: DatabaseConfiguration? = null\n\nfun DatabaseConfiguration?.newConfig(\n    databasePath: String? = null, \n    encryptionKey: EncryptionKey? = null\n): DatabaseConfiguration\n
      "},{"location":"kotlin-extensions/#replication","title":"Replication","text":"

      Use ReplicatorConfigurationFactory to create a ReplicatorConfiguration object, overriding the receiver\u2019s values with the passed parameters.

      In UseDefinition
      val replicator = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        collections = mapOf(db.collections to null),\n        target = URLEndpoint(\"ws://localhost:4984/getting-started-db\"),\n        type = ReplicatorType.PUSH_AND_PULL,\n        authenticator = BasicAuthenticator(\"sync-gateway\", \"password\".toCharArray())\n    )\n)\n
      val ReplicatorConfigurationFactory: ReplicatorConfiguration? = null\n\npublic fun ReplicatorConfiguration?.newConfig(\n    target: Endpoint? = null,\n    collections: Map<out kotlin.collections.Collection<Collection>, CollectionConfiguration?>? = null,\n    type: ReplicatorType? = null,\n    continuous: Boolean? = null,\n    authenticator: Authenticator? = null,\n    headers: Map<String, String>? = null,\n    pinnedServerCertificate: ByteArray? = null,\n    maxAttempts: Int? = null,\n    maxAttemptWaitTime: Int? = null,\n    heartbeat: Int? = null,\n    enableAutoPurge: Boolean? = null,\n    acceptOnlySelfSignedServerCertificate: Boolean? = null,\n    acceptParentDomainCookies: Boolean? = null\n): ReplicatorConfiguration\n
      "},{"location":"kotlin-extensions/#full-text-search","title":"Full Text Search","text":"

      Use FullTextIndexConfigurationFactory to create a FullTextIndexConfiguration object, overriding the receiver\u2019s values with the passed parameters.

      In UseDefinition
      collection.createIndex(\n    \"overviewFTSIndex\",\n    FullTextIndexConfigurationFactory.newConfig(\"overview\")\n)\n
      val FullTextIndexConfigurationFactory: FullTextIndexConfiguration? = null\n\nfun FullTextIndexConfiguration?.newConfig(\n    vararg expressions: String = emptyArray(), \n    language: String? = null, \n    ignoreAccents: Boolean? = null\n): FullTextIndexConfiguration\n
      "},{"location":"kotlin-extensions/#indexing","title":"Indexing","text":"

      Use ValueIndexConfigurationFactory to create a ValueIndexConfiguration object, overriding the receiver\u2019s values with the passed parameters.

      In UseDefinition
      collection.createIndex(\n    \"TypeNameIndex\",\n    ValueIndexConfigurationFactory.newConfig(\"type\", \"name\")\n)\n
      val ValueIndexConfigurationFactory: ValueIndexConfiguration? = null\n\nfun ValueIndexConfiguration?.newConfig(vararg expressions: String = emptyArray()): ValueIndexConfiguration\n
      "},{"location":"kotlin-extensions/#logs","title":"Logs","text":"

      Use LogFileConfigurationFactory to create a LogFileConfiguration object, overriding the receiver\u2019s values with the passed parameters.

      In UseDefinition
      Database.log.file.apply {\n    config = LogFileConfigurationFactory.newConfig(\n        directory = \"path/to/temp/logs\",\n        maxSize = 10240,\n        maxRotateCount = 5,\n        usePlainText = false\n    )\n    level = LogLevel.INFO\n}\n
      val LogFileConfigurationFactory: LogFileConfiguration? = null\n\nfun LogFileConfiguration?.newConfig(\n    directory: String? = null,\n    maxSize: Long? = null,\n    maxRotateCount: Int? = null,\n    usePlainText: Boolean? = null\n): LogFileConfiguration\n
      "},{"location":"kotlin-extensions/#change-flows","title":"Change Flows","text":"

      These wrappers use Flows to monitor for changes.

      "},{"location":"kotlin-extensions/#collection-change-flow","title":"Collection Change Flow","text":"

      Use the Collection.collectionChangeFlow() to monitor collection change events.

      In UseDefinition
      scope.launch {\n    collection.collectionChangeFlow()\n        .map { it.documentIDs }\n        .collect { docIds: List<String> ->\n            // handle changes\n        }\n}\n
      fun Collection.collectionChangeFlow(\n    coroutineContext: CoroutineContext? = null\n): Flow<CollectionChange>\n
      "},{"location":"kotlin-extensions/#document-change-flow","title":"Document Change Flow","text":"

      Use Collection.documentChangeFlow() to monitor changes to a document.

      In UseDefinition
      scope.launch {\n    collection.documentChangeFlow(\"1001\")\n        .map { it.collection.getDocument(it.documentID)?.getString(\"lastModified\") }\n        .collect { lastModified: String? ->\n            // handle document changes\n        }\n}\n
      fun Collection.documentChangeFlow(\n    documentId: String, \n    coroutineContext: CoroutineContext? = null\n): Flow<DocumentChange>\n
      "},{"location":"kotlin-extensions/#replicator-change-flow","title":"Replicator Change Flow","text":"

      Use Replicator.replicatorChangeFlow() to monitor replicator changes.

      In UseDefinition
      scope.launch {\n    repl.replicatorChangesFlow()\n        .map { it.status.activityLevel }\n        .collect { activityLevel: ReplicatorActivityLevel ->\n            // handle replicator changes\n        }\n}\n
      fun Replicator.replicatorChangesFlow(\n    coroutineContext: CoroutineContext? = null\n): Flow<ReplicatorChange>\n
      "},{"location":"kotlin-extensions/#document-replicator-change-flow","title":"Document Replicator Change Flow","text":"

      Use Replicator.documentReplicationFlow() to monitor document changes during replication.

      In UseDefinition
      scope.launch {\n    repl.documentReplicationFlow()\n        .map { it.documents }\n        .collect { docs: List<ReplicatedDocument> ->\n            // handle replicated documents\n        }\n}\n
      fun Replicator.documentReplicationFlow(\n    coroutineContext: CoroutineContext? = null\n): Flow<DocumentReplication>\n
      "},{"location":"kotlin-extensions/#query-change-flow","title":"Query Change Flow","text":"

      Use Query.queryChangeFlow() to monitor changes to a query.

      In UseDefinition
      scope.launch {\n    query.queryChangeFlow()\n        .mapNotNull { change ->\n            val err = change.error\n            if (err != null) {\n                throw err\n            }\n            change.results?.allResults()\n        }\n        .collect { results: List<Result> ->\n            // handle query results\n        }\n}\n
      fun Query.queryChangeFlow(\n    coroutineContext: CoroutineContext? = null\n): Flow<QueryChange>\n
      "},{"location":"kotlin-extensions/#fragment-subscripts","title":"Fragment Subscripts","text":"

      Kotbase uses Kotlin's indexed access operator to implement Couchbase Lite's Fragment subscript APIs for Database, Collection, Document, Array, Dictionary, and Result, for concise, type-safe, and null-safe access to arbitrary values in a nested JSON object. MutableDocument, MutableArray, and MutableDictionary also support the MutableFragment APIs for mutating values.

      Supported types can get Fragment or MutableFragment objects by either index or key. Fragment objects represent an arbitrary entry in a key path, themselves supporting subscript access to nested values.

      Finally, the typed optional value at the end of a key path can be accessed or set with the Fragment properties, e.g. array, dictionary, string, int, date, etc.

      Subscript API examples

      val db = Database(\"db\")\nval coll = db.defaultCollection\nval doc = coll[\"doc-id\"]       // DocumentFragment\ndoc.exists                     // true or false\ndoc.document                   // \"doc-id\" Document from Database\ndoc[\"array\"].array             // Array value from \"array\" key\ndoc[\"array\"][0].string         // String value from first Array item\ndoc[\"dict\"].dictionary         // Dictionary value from \"dict\" key\ndoc[\"dict\"][\"num\"].int         // Int value from Dictionary \"num\" key\ncoll[\"milk\"][\"exp\"].date       // Instant value from \"exp\" key from \"milk\" Document\nval newDoc = MutableDocument(\"new-id\")\nnewDoc[\"name\"].value = \"Sally\" // set \"name\" value\n
      "},{"location":"ktx/","title":"KTX","text":"

      The KTX extensions include the excellent Kotlin extensions by MOLO17, as well as other convenience functions for composing queries, observing change Flows, and creating indexes.

      "},{"location":"ktx/#installation","title":"Installation","text":"Enterprise EditionCommunity Edition build.gradle.kts
      kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite-ee-ktx:3.1.3-1.1.0\")\n        }\n    }\n}\n
      build.gradle.kts
      kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite-ktx:3.1.3-1.1.0\")\n        }\n    }\n}\n
      "},{"location":"ktx/#usage","title":"Usage","text":""},{"location":"ktx/#querybuilder-extensions","title":"QueryBuilder extensions","text":"

      The syntax for building a query is more straightforward thanks to Kotlin's infix function support.

      select(all()) from collection where { \"type\" equalTo \"user\" }\n

      Or just a bunch of fields:

      select(\"name\", \"surname\") from collection where { \"type\" equalTo \"user\" }\n

      Or if you also want the document ID:

      select(Meta.id, all()) from collection where { \"type\" equalTo \"user\" }\nselect(Meta.id, \"name\", \"surname\") from collection where { \"type\" equalTo \"user\" }\n

      You can even do more powerful querying:

      select(\"name\", \"type\")\n    .from(collection)\n    .where {\n        ((\"type\" equalTo \"user\") and (\"name\" equalTo \"Damian\")) or\n        ((\"type\" equalTo \"pet\") and (\"name\" like \"Kitt\"))\n    }\n    .orderBy { \"name\".ascending() }\n    .limit(10)\n

      There are also convenience extensions for performing SELECT COUNT(*) queries:

      val query = selectCount() from collection where { \"type\" equalTo \"user\" }\nval count = query.execute().countResult()\n
      "},{"location":"ktx/#document-builder-dsl","title":"Document builder DSL","text":"

      For creating a MutableDocument ready to be saved, you can use a Kotlin builder DSL:

      val document = MutableDocument {\n    \"name\" to \"Damian\"\n    \"surname\" to \"Giusti\"\n    \"age\" to 24\n    \"pets\" to listOf(\"Kitty\", \"Kitten\", \"Kitto\")\n    \"type\" to \"user\"\n}\n\ncollection.save(document)\n
      "},{"location":"ktx/#collection-creation-functions","title":"Collection creation functions","text":"

      You can create a MutableArray or MutableDictionary using idiomatic vararg functions:

      mutableArrayOf(\"hello\", 42, true)\nmutableDictOf(\"key1\" to \"value1\", \"key2\" to 2, \"key3\" to null)\n

      The similar mutableDocOf function allows nesting dictionary types, unlike the MutableDocument DSL:

      mutableDocOf(\n    \"string\" to \"hello\",\n    \"number\" to 42,\n    \"array\" to mutableArrayOf(1, 2, 3),\n    \"dict\" to mutableDictOf(\"key\" to \"value\")\n)\n
      "},{"location":"ktx/#flow-support","title":"Flow support","text":"

In addition to the Flow APIs from Couchbase Lite Android KTX, which are present in the base couchbase-lite modules, Kotbase KTX adds further useful Flow APIs.

      "},{"location":"ktx/#query-flow","title":"Query Flow","text":"

      Query.asFlow() builds on top of Query.queryChangeFlow() to emit non-null ResultSets and throw any QueryChange errors.

      select(all())\n    .from(collection)\n    .where { \"type\" equalTo \"user\" }\n    .asFlow()\n    .collect { value: ResultSet -> \n        // consume ResultSet\n    }\n
      "},{"location":"ktx/#document-flow","title":"Document Flow","text":"

      Unlike Collection.documentChangeFlow(), which only emits DocumentChanges, Collection.documentFlow() handles the common use case of getting the initial document state and observing changes from the collection, enabling reactive UI patterns.

      collection.documentFlow(\"userProfile\")\n    .collect { doc: Document? ->\n        // consume Document\n    }\n
      "},{"location":"ktx/#resultset-model-mapping","title":"ResultSet model mapping","text":""},{"location":"ktx/#map-delegation","title":"Map delegation","text":"

      Thanks to Map delegation, mapping a ResultSet to a Kotlin class has never been so easy.

      The library provides the ResultSet.toObjects() and Query.asObjectsFlow() extensions for helping to map results given a factory lambda.

      Such factory lambdas accept a Map<String, Any?> and return an instance of a certain type. Those requirements fit perfectly with a Map-delegated class.

      class User(map: Map<String, Any?>) {\n    val name: String by map\n    val surname: String by map\n    val age: Int by map\n}\n\nval users: List<User> = query.execute().toObjects(::User)\n\nval usersFlow: Flow<List<User>> = query.asObjectsFlow(::User)\n
      "},{"location":"ktx/#json-deserialization","title":"JSON deserialization","text":"

Kotbase KTX also provides extensions for mapping documents from a JSON string to a Kotlin class. This works well together with a serialization library, like kotlinx-serialization, to decode the JSON string to a Kotlin object.

      @Serializable\nclass User(\n    val name: String,\n    val surname: String,\n    val age: Int\n)\n\nval users: List<User> = query.execute().toObjects { json: String ->\n    Json.decodeFromString<User>(json)\n}\n\nval usersFlow: Flow<List<User>> = query.asObjectsFlow { json: String ->\n    Json.decodeFromString<User>(json)\n}\n
      "},{"location":"ktx/#index-creation","title":"Index creation","text":"

      Kotbase KTX provides concise top-level functions for index creation:

      collection.createIndex(\"typeNameIndex\", valueIndex(\"type\", \"name\"))\ncollection.createIndex(\"overviewFTSIndex\", fullTextIndex(\"overview\"))\n
      "},{"location":"ktx/#replicator-extensions","title":"Replicator extensions","text":"

      For the Android platform, you can bind the Replicator start() and stop() methods to be performed automatically when your Lifecycle-enabled component gets resumed or paused.

      // Binds the Replicator to the Application lifecycle.\nreplicator.bindToLifecycle(ProcessLifecycleOwner.get().lifecycle)\n
      // Binds the Replicator to the Activity/Fragment lifecycle.\n// inside an Activity or Fragment...\noverride fun onCreate(savedInstanceState: Bundle?) {\n    replicator.bindToLifecycle(lifecycle)\n}\n

That's it! The Replicator will be automatically started when your component passes the ON_RESUME state, and stopped when it passes the ON_PAUSE state. No further action is taken after the ON_DESTROY state.

      "},{"location":"license/","title":"License","text":"

      Copyright 2023 Jeff Lockhart

      Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

      "},{"location":"license/#third-party-licenses","title":"Third Party Licenses","text":"
      • AndroidX
      • Couchbase Lite
      • Dokka
      • Kermit
      • KorIO
      • Kotlin
      • kotlinx-atomicfu
      • kotlinx-binary-compatibility-validator
      • kotlinx-coroutines
      • kotlinx-datetime
      • kotlinx-io
      • kotlinx-kover
      • kotlinx-serialization
      • Material for MkDocs
      • Mike
      • MkDocs
      • MkDocs Macros Plugin
      • MockK
      • MOLO17 Couchbase Lite Kotlin
      • Multiplatform Paging
      • Stately
      • vanniktech gradle-maven-publish-plugin
      "},{"location":"live-queries/","title":"Live Queries","text":"

      Couchbase Lite database data querying concepts \u2014 live queries

      "},{"location":"live-queries/#activating-a-live-query","title":"Activating a Live Query","text":"

A live query is a query that, once activated, remains active and monitors the database for changes, refreshing the result set whenever a change occurs. As such, it is a great way to build reactive user interfaces \u2014 especially table/list views \u2014 that keep themselves up to date.

A simple use case: a replicator pulls new data from a server while a live-query-driven UI automatically updates to show it, without the user having to refresh manually. This helps your app feel quick and responsive.

      To activate a live query, just add a change listener to the query statement. It will be immediately active. When a change is detected the query automatically runs, and posts the new query result to any observers (change listeners).

      Example 1. Starting a Live Query

      val query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection)) \n\n// Adds a query change listener.\n// Changes will be posted on the main queue.\nval token = query.addChangeListener { change ->\n    change.results?.let { rs ->\n        rs.forEach {\n            println(\"results: ${it.keys}\")\n            /* Update UI */\n        }\n    } \n}\n
      1. Build the query statements.
      2. Activate the live query by attaching a listener. Save the token in order to detach the listener and stop the query later \u2014 see Example 2.

      Example 2. Stop a Live Query

      token.remove()\n

Here we use the change listener token from Example 1 to remove the listener. Doing so stops the live query.

      "},{"location":"live-queries/#using-kotlin-flows","title":"Using Kotlin Flows","text":"

      Kotlin developers also have the option of using Flows to feed query changes to the UI.

      Define a live query as a Flow and activate a collector in the view creation function.

      fun watchQuery(query: Query): Flow<List<Result>> {\n    return query.queryChangeFlow()\n        .mapNotNull { change ->\n            val err = change.error\n            if (err != null) {\n               throw err\n            }\n            change.results?.allResults()\n        }\n}\n
      "},{"location":"n1ql-query-builder-differences/","title":"SQL++ QueryBuilder Differences","text":"

      Differences between Couchbase Lite\u2019s QueryBuilder and SQL++ for Mobile

      Couchbase Lite\u2019s SQL++ for Mobile supports all QueryBuilder features, except Predictive Query and Index. See Table 1 for the features supported by SQL++ but not by QueryBuilder.

      Table 1. QueryBuilder Differences

Category Components Conditional Operator CASE(WHEN \u2026 THEN \u2026 ELSE \u2026) Array Functions ARRAY_AGG ARRAY_AVG ARRAY_COUNT ARRAY_IFNULL ARRAY_MAX ARRAY_MIN ARRAY_SUM Conditional Functions IFMISSING IFMISSINGORNULL IFNULL MISSINGIF NULLIF Math Functions DIV IDIV ROUND_EVEN Pattern Matching Functions REGEXP_CONTAINS REGEXP_LIKE REGEXP_POSITION REGEXP_REPLACE Type Checking Functions ISARRAY ISATOM ISBOOLEAN ISNUMBER ISOBJECT ISSTRING TYPE Type Conversion Functions TOARRAY TOATOM TOBOOLEAN TONUMBER TOOBJECT TOSTRING"},{"location":"n1ql-query-strings/","title":"SQL++ Query Strings","text":"

      How to use SQL++ query strings to build effective queries with Kotbase

      Note

      The examples used in this topic are based on the Travel Sample app and data introduced in the Couchbase Mobile Workshop tutorial.

      "},{"location":"n1ql-query-strings/#introduction","title":"Introduction","text":"

      Developers using Kotbase can provide SQL++ query strings using the SQL++ Query API. This API uses query statements of the form shown in Example 2.

      The structure and semantics of the query format are based on that of Couchbase Server\u2019s SQL++ query language \u2014 see SQL++ Reference Guide and SQL++ Data Model.

      "},{"location":"n1ql-query-strings/#running","title":"Running","text":"

      The database can create a query object with the SQL++ string. See Query Result Sets for how to work with result sets.

      Example 1. Running a SQL++ Query

      val query = database.createQuery(\n    \"SELECT META().id AS id FROM _ WHERE type = \\\"hotel\\\"\"\n)\nreturn query.execute().use { rs -> rs.allResults() }\n

      We are accessing the current database using the shorthand notation _ \u2014 see the FROM clause for more on data source selection and Query Parameters for more on parameterized queries.

      "},{"location":"n1ql-query-strings/#query-format","title":"Query Format","text":"

      The API uses query statements of the form shown in Example 2.

      Example 2. Query Format

SELECT ____\nFROM 'data-source'\nWHERE ____\nJOIN ____\nGROUP BY ____\nORDER BY ____\nLIMIT ____\nOFFSET ____\n

      Query Components

Component Description SELECT statement The document properties that will be returned in the result set FROM The data source to be queried WHERE statement The query criteria. The SELECTed properties of documents matching these criteria will be returned in the result set JOIN statement The criteria for joining multiple documents GROUP BY statement The criteria used to group returned items in the result set ORDER BY statement The criteria used to order the items in the result set LIMIT statement The maximum number of results to be returned OFFSET statement The number of results to be skipped before starting to return results

      Tip

      We recommend working through the SQL++ Tutorials to build your SQL++ skills.

      "},{"location":"n1ql-query-strings/#select-statement","title":"SELECT statement","text":""},{"location":"n1ql-query-strings/#purpose","title":"Purpose","text":"

      Projects the result returned by the query, identifying the columns it will contain.

      "},{"location":"n1ql-query-strings/#syntax","title":"Syntax","text":"

      Example 3. SQL++ Select Syntax

      select = SELECT _ ( DISTINCT | ALL )? selectResult\n\nselectResults = selectResult ( _ ',' _ selectResult )*\n\nselectResult = expression ( _ (AS)? columnAlias )?\n\ncolumnAlias = IDENTIFIER\n
      "},{"location":"n1ql-query-strings/#arguments","title":"Arguments","text":"
      1. The select clause begins with the SELECT keyword.
        • The optional ALL argument is used to specify that the query should return ALL results (the default).
        • The optional DISTINCT argument specifies that the query should remove duplicated results.
2. selectResults is a list of columns projected in the query result. Each column is an expression, which could be a property expression or any other expression or function. You can use the wildcard * to select all columns \u2014 see Select Wildcard.
      3. Use the optional AS argument to provide an alias name for a property. Each property can be aliased by putting the AS <alias name> after the column name.
      "},{"location":"n1ql-query-strings/#select-wildcard","title":"Select Wildcard","text":"

When using the SELECT * option, the column name (key) in the result is one of:

      • The alias name if one was specified
      • The data source name (or its alias if provided) as specified in the FROM clause.

This behavior is in line with that of Couchbase Server SQL++ \u2014 see example in Table 1.

      Table 1. Example Column Names for SELECT *

      Query Column Name SELECT * AS data FROM _ data SELECT * FROM _ _ SELECT * FROM _default _default SELECT * FROM db db SELECT * FROM db AS store store"},{"location":"n1ql-query-strings/#example","title":"Example","text":"

      Example 4. SELECT properties

      SELECT *\n\nSELECT db.* AS data\n\nSELECT name fullName\n\nSELECT db.name fullName\n\nSELECT DISTINCT address.city\n
      1. Use the * wildcard to select all properties.
      2. Select all properties from the db data source. Give the object an alias name of data.
      3. Select a pair of properties.
      4. Select a specific property from the db data source.
      5. Select the property item city from its parent property address.

      See Query Result Sets for more on processing query results.

      "},{"location":"n1ql-query-strings/#from","title":"FROM","text":""},{"location":"n1ql-query-strings/#purpose_1","title":"Purpose","text":"

      Specifies the data source, or sources, and optionally applies an alias (AS). It is mandatory.

      "},{"location":"n1ql-query-strings/#syntax_1","title":"Syntax","text":"
      FROM dataSource\n      (optional JOIN joinClause )\n
      "},{"location":"n1ql-query-strings/#datasource","title":"Datasource","text":"

      A datasource can be:

      • < database-name > : default collection
      • _ (underscore) : default collection
      • < scope-name >.< collection-name > : a collection in a scope
      • < collection-name > : a collection in the default scope
      "},{"location":"n1ql-query-strings/#arguments_1","title":"Arguments","text":"
1. Here dataSource is the database name against which the query is to run. Use AS to give the database an alias you can use within the query. To use the current database, without specifying a name, use _ as the datasource.
      2. JOIN joinclause \u2014 use this optional argument to link data sources \u2014 see JOIN statement.
"},{"location":"n1ql-query-strings/#example_1","title":"Example","text":"

        Example 5. FROM clause

        SELECT name FROM db\nSELECT name FROM scope.collection\nSELECT store.name FROM db AS store\nSELECT store.name FROM db store\nSELECT name FROM _\nSELECT store.name FROM _ AS store\nSELECT store.name FROM _ store\n
        "},{"location":"n1ql-query-strings/#join-statement","title":"JOIN statement","text":""},{"location":"n1ql-query-strings/#purpose_2","title":"Purpose","text":"

        The JOIN clause enables you to select data from multiple data sources linked by criteria specified in the JOIN statement.

Currently, only self-joins are supported. For example, to combine airline details with route details linked by the airline id \u2014 see Example 6.

        "},{"location":"n1ql-query-strings/#syntax_2","title":"Syntax","text":"
        joinClause = ( join )*\n\njoin = joinOperator _ dataSource _  (constraint)?\n\njoinOperator = ( LEFT (OUTER)? | INNER | CROSS )? JOIN\n\ndataSource = databaseName ( ( AS | _ )? databaseAlias )?\n\nconstraint ( ON expression )?\n
        "},{"location":"n1ql-query-strings/#arguments_2","title":"Arguments","text":"
        1. The join clause starts with a JOIN operator followed by the data source.
2. Five JOIN operators are supported: JOIN, LEFT JOIN, LEFT OUTER JOIN, INNER JOIN, and CROSS JOIN. Note: JOIN and INNER JOIN are equivalent, as are LEFT JOIN and LEFT OUTER JOIN.
        3. The join constraint starts with the ON keyword followed by the expression that defines the joining constraints.
        "},{"location":"n1ql-query-strings/#example_2","title":"Example","text":"
        SELECT db.prop1, other.prop2 FROM db JOIN db AS other ON db.key = other.key\n\nSELECT db.prop1, other.prop2 FROM db LEFT JOIN db other ON db.key = other.key\n\nSELECT * FROM route r JOIN airline a ON r.airlineid = meta(a).id WHERE a.country = \"France\"\n

        Example 6. Using JOIN to Combine Document Details

This example joins documents of type route with documents of type airline, using the document ID (_id) on the airline document and airlineid on the route document.

        SELECT * FROM travel-sample r JOIN travel-sample a ON r.airlineid = a.meta.id WHERE a.country = \"France\"\n
        "},{"location":"n1ql-query-strings/#where-statement","title":"WHERE statement","text":""},{"location":"n1ql-query-strings/#purpose_3","title":"Purpose","text":"

        Specifies the selection criteria used to filter results.

        As with SQL, use the WHERE statement to choose which documents are returned by your query.

        "},{"location":"n1ql-query-strings/#syntax_3","title":"Syntax","text":"
        where = WHERE expression\n
        "},{"location":"n1ql-query-strings/#arguments_3","title":"Arguments","text":"

        WHERE evaluates expression to a BOOLEAN value. You can chain any number of expressions in order to implement sophisticated filtering capabilities.

        See also \u2014 Operators for more on building expressions and Query Parameters for more on parameterized queries.

        "},{"location":"n1ql-query-strings/#examples","title":"Examples","text":"
        SELECT name FROM db WHERE department = 'engineer' AND group = 'mobile'\n
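Chaining several expressions in a WHERE clause can be sketched from Kotbase as a plain query string. This is an illustrative sketch: the `database` handle and the `department`, `group`, and `city` properties are assumptions, not part of any real schema.

```kotlin
// Sketch: a compound WHERE clause combining chained expressions.
// `database` and all property names are assumed for illustration.
val query = database.createQuery(
    """
    SELECT name FROM _
    WHERE (department = 'engineer' AND group = 'mobile')
       OR (department = 'sales' AND city = 'San Mateo')
    """
)
query.execute().use { rs ->
    rs.allResults().forEach { println(it.getString("name")) }
}
```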
        "},{"location":"n1ql-query-strings/#group-by-statement","title":"GROUP BY statement","text":""},{"location":"n1ql-query-strings/#purpose_4","title":"Purpose","text":"

        Use GROUP BY to arrange values in groups of one or more properties.

        "},{"location":"n1ql-query-strings/#syntax_4","title":"Syntax","text":"
        groupBy = grouping _( having )?\n\ngrouping = GROUP BY expression( _ ',' _ expression )*\n\nhaving = HAVING expression\n
        "},{"location":"n1ql-query-strings/#arguments_4","title":"Arguments","text":"
        1. The group by clause starts with the GROUP BY keyword followed by one or more expressions.
        2. grouping \u2014 the group by clause is normally used together with the aggregate functions (e.g. COUNT, MAX, MIN, SUM, AVG).
        3. having \u2014 allows you to filter the result based on aggregate functions \u2014 for example, HAVING count(empnum)>100.
        "},{"location":"n1ql-query-strings/#examples_1","title":"Examples","text":"
SELECT COUNT(empno), city FROM db GROUP BY city\n\nSELECT COUNT(empno), city FROM db GROUP BY city HAVING COUNT(empno) > 100\n\nSELECT COUNT(empno), city FROM db WHERE state = 'CA' GROUP BY city HAVING COUNT(empno) > 100\n
        "},{"location":"n1ql-query-strings/#order-by-statement","title":"ORDER BY statement","text":""},{"location":"n1ql-query-strings/#purpose_5","title":"Purpose","text":"

        Sort query results based on a given expression result.

        "},{"location":"n1ql-query-strings/#syntax_5","title":"Syntax","text":"
        orderBy = ORDER BY ordering ( _ ',' _ ordering )*\n\nordering = expression ( _ order )?\n\norder = ( ASC / DESC )\n
        "},{"location":"n1ql-query-strings/#arguments_5","title":"Arguments","text":"
        1. orderBy \u2014 The order by clause starts with the ORDER BY keyword followed by the ordering clause.
        2. ordering \u2014 The ordering clause specifies the properties or expressions to use for ordering the results.
        3. order \u2014 In each ordering clause, the sorting direction is specified using the optional ASC (ascending) or DESC (descending) directives. Default is ASC.
        "},{"location":"n1ql-query-strings/#examples_2","title":"Examples","text":"

        Example 7. Simple usage

        SELECT name FROM db  ORDER BY name\n\nSELECT name FROM db  ORDER BY name DESC\n\nSELECT name, score FROM db  ORDER BY name ASC, score DESC\n
        "},{"location":"n1ql-query-strings/#limit-statement","title":"LIMIT statement","text":""},{"location":"n1ql-query-strings/#purpose_6","title":"Purpose","text":"

        Specifies the maximum number of results to be returned by the query.

        "},{"location":"n1ql-query-strings/#syntax_6","title":"Syntax","text":"
        limit = LIMIT expression\n
        "},{"location":"n1ql-query-strings/#arguments_6","title":"Arguments","text":"

        The limit clause starts with the LIMIT keyword followed by an expression that will be evaluated as a number.

        "},{"location":"n1ql-query-strings/#examples_3","title":"Examples","text":"

        Example 8. Simple usage

        SELECT name FROM db LIMIT 10\n

        Return only 10 results

        "},{"location":"n1ql-query-strings/#offset-statement","title":"OFFSET statement","text":""},{"location":"n1ql-query-strings/#purpose_7","title":"Purpose","text":"

        Specifies the number of results to be skipped by the query.

        "},{"location":"n1ql-query-strings/#syntax_7","title":"Syntax","text":"
        offset = OFFSET expression\n
        "},{"location":"n1ql-query-strings/#arguments_7","title":"Arguments","text":"

The offset clause starts with the OFFSET keyword followed by an expression that will be evaluated as a number, representing the number of results to skip before the query begins returning results.

        "},{"location":"n1ql-query-strings/#examples_4","title":"Examples","text":"

        Example 9. Simple usage

        SELECT name FROM db OFFSET 10\n\nSELECT name FROM db  LIMIT 10 OFFSET 10\n
1. Ignore the first 10 results
2. Ignore the first 10 results, then return the next 10 results
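Together, LIMIT and OFFSET support simple paging. A hedged Kotlin sketch, assuming a `database` handle and that parameters are accepted wherever the grammar allows an expression:

```kotlin
// Sketch: paging through results with parameterized LIMIT/OFFSET.
// `database` and the page size are assumptions for illustration.
fun fetchPage(page: Int, pageSize: Int = 10): List<Result> {
    val query = database.createQuery(
        "SELECT name FROM _ LIMIT \$limit OFFSET \$offset"
    )
    query.parameters = Parameters()
        .setInt("limit", pageSize)
        .setInt("offset", page * pageSize)
    return query.execute().use { rs -> rs.allResults() }
}
```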
        "},{"location":"n1ql-query-strings/#expressions","title":"Expressions","text":"

        In this section Literals | Identifiers | Property Expressions | Any and Every Expressions | Parameter Expressions | Parenthesis Expressions

        Expressions are references to identifiers that resolve to values. Categories of expression comprise the elements covered in this section (see above), together with Operators and Functions, which are covered in their own sections.

        "},{"location":"n1ql-query-strings/#literals","title":"Literals","text":"

        Boolean | Numeric | String | NULL | MISSING | Array | Dictionary

        "},{"location":"n1ql-query-strings/#boolean","title":"Boolean","text":""},{"location":"n1ql-query-strings/#purpose_8","title":"Purpose","text":"

        Represents a true or false value.

        "},{"location":"n1ql-query-strings/#syntax_8","title":"Syntax","text":"

        TRUE | FALSE

        "},{"location":"n1ql-query-strings/#example_3","title":"Example","text":"
        SELECT value FROM db  WHERE value = true\nSELECT value FROM db  WHERE value = false\n
        "},{"location":"n1ql-query-strings/#numeric","title":"Numeric","text":""},{"location":"n1ql-query-strings/#purpose_9","title":"Purpose","text":"

Represents a numeric value. Numbers consist of signed or unsigned digits, with optional fractional and exponent components.

        "},{"location":"n1ql-query-strings/#syntax_9","title":"Syntax","text":"
        '-'? (('.' DIGIT+) | (DIGIT+ ('.' DIGIT*)?)) ( [Ee] [-+]? DIGIT+ )? WB\n\nDIGIT = [0-9]\n
        "},{"location":"n1ql-query-strings/#example_4","title":"Example","text":"
        SELECT value FROM db  WHERE value = 10\nSELECT value FROM db  WHERE value = 0\nSELECT value FROM db WHERE value = -10\nSELECT value FROM db WHERE value = 10.25\nSELECT value FROM db WHERE value = 10.25e2\nSELECT value FROM db WHERE value = 10.25E2\nSELECT value FROM db WHERE value = 10.25E+2\nSELECT value FROM db WHERE value = 10.25E-2\n
        "},{"location":"n1ql-query-strings/#string","title":"String","text":""},{"location":"n1ql-query-strings/#purpose_10","title":"Purpose","text":"

        The string literal represents a string or sequence of characters.

        "},{"location":"n1ql-query-strings/#syntax_10","title":"Syntax","text":"
        \"characters\" | 'characters'\n

        The string literal can be double-quoted as well as single-quoted.

        "},{"location":"n1ql-query-strings/#example_5","title":"Example","text":"
        SELECT firstName, lastName FROM db WHERE middleName = \"middle\"\nSELECT firstName, lastName FROM db WHERE middleName = 'middle'\n
        "},{"location":"n1ql-query-strings/#null","title":"NULL","text":""},{"location":"n1ql-query-strings/#purpose_11","title":"Purpose","text":"

        The literal NULL represents an empty value.

        "},{"location":"n1ql-query-strings/#syntax_11","title":"Syntax","text":"
        NULL\n
        "},{"location":"n1ql-query-strings/#example_6","title":"Example","text":"
        SELECT firstName, lastName FROM db WHERE middleName IS NULL\n
        "},{"location":"n1ql-query-strings/#missing","title":"MISSING","text":""},{"location":"n1ql-query-strings/#purpose_12","title":"Purpose","text":"

        The MISSING literal represents a missing name-value pair in a document.

        "},{"location":"n1ql-query-strings/#syntax_12","title":"Syntax","text":"
        MISSING\n
        "},{"location":"n1ql-query-strings/#example_7","title":"Example","text":"
        SELECT firstName, lastName FROM db WHERE middleName IS MISSING\n
        "},{"location":"n1ql-query-strings/#array","title":"Array","text":""},{"location":"n1ql-query-strings/#purpose_13","title":"Purpose","text":"

        Represents an Array.

        "},{"location":"n1ql-query-strings/#syntax_13","title":"Syntax","text":"
        arrayLiteral = '[' _ (expression ( _ ',' _ e2:expression )* )? ']'\n
        "},{"location":"n1ql-query-strings/#example_8","title":"Example","text":"
        SELECT [\"a\", \"b\", \"c\"] FROM _\nSELECT [ property1, property2, property3] FROM _\n
        "},{"location":"n1ql-query-strings/#dictionary","title":"Dictionary","text":""},{"location":"n1ql-query-strings/#purpose_14","title":"Purpose","text":"

        Represents a dictionary literal.

        "},{"location":"n1ql-query-strings/#syntax_14","title":"Syntax","text":"
        dictionaryLiteral = '{' _ ( STRING_LITERAL ':' e:expression\n  ( _ ',' _ STRING_LITERAL ':' _ expression )* )?\n   '}'\n
        "},{"location":"n1ql-query-strings/#example_9","title":"Example","text":"
        SELECT { 'name': 'James', 'department': 10 } FROM db\nSELECT { 'name': 'James', 'department': dept } FROM db\nSELECT { 'name': 'James', 'phones': ['650-100-1000', '650-100-2000'] } FROM db\n
        "},{"location":"n1ql-query-strings/#identifiers","title":"Identifiers","text":""},{"location":"n1ql-query-strings/#purpose_15","title":"Purpose","text":"

        Identifiers provide symbolic references. Use them for example to identify: column alias names, database names, database alias names, property names, parameter names, function names, and FTS index names.

        "},{"location":"n1ql-query-strings/#syntax_15","title":"Syntax","text":"
        <[a-zA-Z_] [a-zA-Z0-9_$]*> _ | \"`\" ( [^`] | \"``\"   )* \"`\"  _\n

Identifiers allow the characters a-z, A-Z, 0-9, _ (underscore), and $. Identifiers are case-sensitive.

        Tip

        To use other characters in the identifier, surround the identifier with the backtick ` character.

        "},{"location":"n1ql-query-strings/#example_10","title":"Example","text":"

        Example 10. Identifiers

        SELECT * FROM _\n\nSELECT * FROM `db-1`\n\nSELECT key FROM db\n\nSELECT key$1 FROM db_1\n\nSELECT `key-1` FROM db\n

        Use of backticks allows a hyphen as part of the identifier name.

        "},{"location":"n1ql-query-strings/#property-expressions","title":"Property Expressions","text":""},{"location":"n1ql-query-strings/#purpose_16","title":"Purpose","text":"

        The property expression is used to reference a property in a document.

        "},{"location":"n1ql-query-strings/#syntax_16","title":"Syntax","text":"
        property = '*'| dataSourceName '.' _ '*'  | propertyPath\n\npropertyPath = propertyName (\n    ('.' _ propertyName ) |\n    ('[' _ INT_LITERAL _ ']' _  )\n    )*\n\npropertyName = IDENTIFIER\n
        1. Prefix the property expression with the data source name or alias to indicate its origin.
        2. Use dot syntax to refer to nested properties in the propertyPath.
        3. Use bracket ([index]) syntax to refer to an item in an array.
4. Use the asterisk (*) character to represent all properties. This can only be used in the result list of the SELECT clause.
        "},{"location":"n1ql-query-strings/#example_11","title":"Example","text":"

        Example 11. Property Expressions

        SELECT *\n  FROM db\n  WHERE contact.name = \"daniel\"\n\nSELECT db.*\n  FROM db\n  WHERE collection.contact.name = \"daniel\"\n\nSELECT collection.contact.address.city\n  FROM scope.collection\n  WHERE collection.contact.name = \"daniel\"\n\nSELECT contact.address.city\n  FROM scope.collection\n  WHERE contact.name = \"daniel\"\n\nSELECT contact.address.city, contact.phones[0]\n  FROM db\n  WHERE contact.name = \"daniel\"\n
        "},{"location":"n1ql-query-strings/#any-and-every-expressions","title":"Any and Every Expressions","text":""},{"location":"n1ql-query-strings/#purpose_17","title":"Purpose","text":"

        Evaluates expressions over items in an array object.

        "},{"location":"n1ql-query-strings/#syntax_17","title":"Syntax","text":"
        arrayExpression = \n  anyEvery _ variableName \n     _ IN  _ expression \n       _ SATISFIES _ expression \n    END \n\nanyEvery = anyOrSome AND EVERY | anyOrSome | EVERY\n\nanyOrSome = ANY | SOME\n
        1. The array expression starts with ANY/SOME, EVERY, or ANY/SOME AND EVERY, each of which has a different function as described below, and is terminated by END
          • ANY/SOME: Returns TRUE if at least one item in the array satisfies the expression, otherwise returns FALSE. NOTE: ANY and SOME are interchangeable.
  • EVERY: Returns TRUE if all items in the array satisfy the expression; otherwise, returns FALSE. If the array is empty, returns TRUE.
          • ANY/SOME AND EVERY: Same as EVERY but returns FALSE if the array is empty.
        2. The variable name represents each item in the array.
        3. The IN keyword is used for specifying the array to be evaluated.
        4. The SATISFIES keyword is used for evaluating each item in the array.
        5. END terminates the array expression.
        "},{"location":"n1ql-query-strings/#example_12","title":"Example","text":"

Example 12. ANY and EVERY Expressions

        SELECT name\n  FROM db\n  WHERE ANY v\n          IN contacts\n          SATISFIES v.city = 'San Mateo'\n        END\n
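For contrast with ANY above, an EVERY expression uses the same syntax; this sketch assumes the same contacts array. Per the semantics above, it matches documents where every contact is in the given city (including documents whose contacts array is empty).

```sql
-- Sketch: EVERY returns TRUE only when all items satisfy the predicate
-- (and TRUE for an empty array); ANY AND EVERY returns FALSE for an empty array.
SELECT name
  FROM db
  WHERE EVERY v
          IN contacts
          SATISFIES v.city = 'San Mateo'
        END
```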
        "},{"location":"n1ql-query-strings/#parameter-expressions","title":"Parameter Expressions","text":""},{"location":"n1ql-query-strings/#purpose_18","title":"Purpose","text":"

        Parameter expressions specify a value to be assigned from the parameter map presented when executing the query.

        Note

        If parameters are specified in the query string, but the parameter and value mapping is not specified in the query object, an error will be thrown when executing the query.

        "},{"location":"n1ql-query-strings/#syntax_18","title":"Syntax","text":"
        $IDENTIFIER\n
        "},{"location":"n1ql-query-strings/#examples_5","title":"Examples","text":"

        Example 13. Parameter Expression

        SELECT name\n  FROM db\n  WHERE department = $department\n

        Example 14. Using a Parameter

        val query = database.createQuery(\"SELECT name WHERE department = \\$department\")\nquery.parameters = Parameters().setValue(\"department\", \"E001\")\nval result = query.execute()\n

        The query resolves to SELECT name WHERE department = \"E001\"

        "},{"location":"n1ql-query-strings/#parenthesis-expressions","title":"Parenthesis Expressions","text":""},{"location":"n1ql-query-strings/#purpose_19","title":"Purpose","text":"

        Use parentheses to group expressions together, to make them more readable or to establish operator precedence.

        "},{"location":"n1ql-query-strings/#example_13","title":"Example","text":"

        Example 15. Parenthesis Expression

        -- Establish the desired operator precedence; do the addition before the multiplication\nSELECT (value1 + value2) * value3\n  FROM db\n\nSELECT *\n  FROM db\n  WHERE ((value1 + value2) * value3) + value4 = 10\n\nSELECT *\n  FROM db\n  -- Clarify the conditional grouping\n  WHERE (value1 = value2)\n     OR (value3 = value4)\n
        "},{"location":"n1ql-query-strings/#operators","title":"Operators","text":"

        In this section Binary Operators | Unary Operators | COLLATE Operators | CONDITIONAL Operator

        "},{"location":"n1ql-query-strings/#binary-operators","title":"Binary Operators","text":"

        Maths | Comparison Operators | Logical Operators | String Operator

        "},{"location":"n1ql-query-strings/#maths","title":"Maths","text":"

        Table 2. Maths Operators

        Op Desc Example + Add WHERE v1 + v2 = 10 - Subtract WHERE v1 - v2 = 10 * Multiply WHERE v1 * v2 = 10 / Divide \u2014 see note \u00b9 WHERE v1 / v2 = 10 % Modulo WHERE v1 % v2 = 0

        \u00b9 If both operands are integers, integer division is used; if either operand is a floating-point number, float division is used. This differs from Server SQL++, which performs float division regardless. Use DIV(x, y) to force float division in CBL SQL++.
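        The difference is easy to reproduce in plain Kotlin; a minimal sketch of the two behaviors (illustrative helpers, not Kotbase functions):

        ```kotlin
        // CBL `/` operator: integer division when both operands are integers,
        // float division otherwise.
        fun cblDivide(a: Number, b: Number): Number =
            if (a is Int && b is Int) a / b        // integer division: 7 / 2 == 3
            else a.toDouble() / b.toDouble()       // float division: 7.0 / 2 == 3.5

        // DIV(x, y): both operands are cast to Double first; result is always a Double,
        // matching Server SQL++'s always-float division.
        fun div(a: Number, b: Number): Double = a.toDouble() / b.toDouble()
        ```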

        "},{"location":"n1ql-query-strings/#comparison-operators","title":"Comparison Operators","text":""},{"location":"n1ql-query-strings/#purpose_20","title":"Purpose","text":"

        The comparison operators are used in the WHERE statement to specify the condition on which to match documents.

        Table 3. Comparison Operators

        Op Desc Example = or == Equals WHERE v1 = v2WHERE v1 == v2 != or <> Not Equal to WHERE v1 != v2WHERE v1 <> v2 > Greater than WHERE v1 > v2 >= Greater than or equal to WHERE v1 >= v2 < Less than WHERE v1 < v2 <= Less than or equal to WHERE v1 <= v2 IN Returns TRUE if the value is in the list or array of values specified by the right-hand side expression; otherwise returns FALSE. WHERE \"James\" IN contactsList LIKE String wildcard pattern matching \u00b2 comparison. Two wildcards are supported:
        • % Matches zero or more characters.
        • _ Matches a single character.
        WHERE name LIKE 'a%'WHERE name LIKE '%a'WHERE name LIKE '%or%'WHERE name LIKE 'a%o%'WHERE name LIKE '%_r%'WHERE name LIKE '%a_%'WHERE name LIKE '%a__%'WHERE name LIKE 'aldo' MATCH String matching using FTS see Full Text Search Functions WHERE v1-index MATCH \"value\" BETWEEN Logically equivalent to v1>=X and v1<=Y WHERE v1 BETWEEN 10 and 100 IS NULL \u00b3 Equal to NULL WHERE v1 IS NULL IS NOT NULL Not equal to NULL WHERE v1 IS NOT NULL IS MISSING Equal to MISSING WHERE v1 IS MISSING IS NOT MISSING Not equal to MISSING WHERE v1 IS NOT MISSING IS VALUED IS NOT NULL AND MISSING WHERE v1 IS VALUED IS NOT VALUED IS NULL OR MISSING WHERE v1 IS NOT VALUED

        \u00b2 Matching is case-insensitive for ASCII characters, case-sensitive for non-ASCII.
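        A LIKE pattern can be approximated by translating it into a regular expression. The sketch below is plain Kotlin (not the Kotbase implementation) and models the case-insensitivity with RegexOption.IGNORE_CASE, which is broader than the ASCII-only rule above:

        ```kotlin
        // Translate a SQL++ LIKE pattern into a Regex:
        // '%' matches zero or more characters, '_' matches exactly one.
        fun likeToRegex(pattern: String): Regex {
            val sb = StringBuilder()
            for (ch in pattern) {
                when (ch) {
                    '%' -> sb.append(".*")
                    '_' -> sb.append(".")
                    else -> sb.append(Regex.escape(ch.toString()))
                }
            }
            return Regex(sb.toString(), RegexOption.IGNORE_CASE)
        }

        fun like(value: String, pattern: String) = likeToRegex(pattern).matches(value)
        ```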

        \u00b3 Use of IS and IS NOT is limited to comparing NULL and MISSING values (this encompasses VALUED). This is different from QueryBuilder, in which they operate as equivalents of == and !=.

        Table 4. Comparing NULL and MISSING values using IS

        OP NON-NULL Value NULL MISSING IS NULL FALSE TRUE MISSING IS NOT NULL TRUE FALSE MISSING IS MISSING FALSE FALSE TRUE IS NOT MISSING TRUE TRUE FALSE IS VALUED TRUE FALSE FALSE IS NOT VALUED FALSE TRUE TRUE"},{"location":"n1ql-query-strings/#logical-operators","title":"Logical Operators","text":""},{"location":"n1ql-query-strings/#purpose_21","title":"Purpose","text":"

        Logical operators combine expressions using the following Boolean Logic Rules:

        • TRUE is TRUE, and FALSE is FALSE
        • Numbers 0 or 0.0 are FALSE
        • Arrays and dictionaries are FALSE
        • String and Blob values are TRUE if they cast to a non-zero number, or FALSE if they cast to 0 or 0.0
        • NULL is FALSE
        • MISSING is MISSING
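        These conversion rules can be sketched in plain Kotlin, modeling NULL as Kotlin null and MISSING as a marker object, with a nullable Boolean result where null stands for MISSING (an approximation; the string-to-number cast here is stricter than SQLite's):

        ```kotlin
        // Marker standing in for a MISSING value.
        object Missing

        // Returns true/false per the CBL rules above, or null for MISSING.
        fun cblTruthy(v: Any?): Boolean? = when (v) {
            Missing -> null                        // MISSING is MISSING
            null -> false                          // NULL is FALSE
            is Boolean -> v
            is Number -> v.toDouble() != 0.0       // 0 and 0.0 are FALSE
            is List<*>, is Map<*, *> -> false      // arrays and dictionaries are FALSE
            is String -> (v.toDoubleOrNull() ?: 0.0) != 0.0  // strings cast to a number
            else -> true
        }
        ```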

        Note

        This is different from Server SQL++, where:

        • MISSING, NULL and FALSE are FALSE
        • The number 0 is FALSE
        • Empty strings, arrays, and objects are FALSE
        • All other values are TRUE

        Tip

        Use the TOBOOLEAN(expr) function to convert a value based on Server SQL++ boolean value rules.

        Table 5. Logical Operators

        Op Description Example AND Returns TRUE if the operand expressions evaluate to TRUE; otherwise FALSE.If an operand is MISSING and the other is TRUE returns MISSING, if the other operand is FALSE it returns FALSE.If an operand is NULL and the other is TRUE returns NULL, if the other operand is FALSE it returns FALSE. WHERE city = \"San Francisco\" AND status = true OR Returns TRUE if one of the operand expressions is evaluated to TRUE; otherwise returns FALSE.If an operand is MISSING, the operation will result in MISSING if the other operand is FALSE or TRUE if the other operand is TRUE.If an operand is NULL, the operation will result in NULL if the other operand is FALSE or TRUE if the other operand is TRUE. WHERE city = \u201cSan Francisco\u201d OR city = \"Santa Clara\"

        Table 6. Logical Operation Table

        a b a AND b a OR b TRUE TRUE TRUE TRUE FALSE FALSE TRUE NULL FALSE \u2075\u207b\u00b9 TRUE MISSING MISSING TRUE FALSE TRUE FALSE TRUE FALSE FALSE FALSE NULL FALSE FALSE \u2075\u207b\u00b9 MISSING FALSE MISSING NULL TRUE FALSE \u2075\u207b\u00b9 TRUE FALSE FALSE FALSE \u2075\u207b\u00b9 NULL FALSE \u2075\u207b\u00b9 FALSE \u2075\u207b\u00b9 MISSING FALSE \u2075\u207b\u00b2 MISSING \u2075\u207b\u00b3 MISSING TRUE MISSING TRUE FALSE FALSE MISSING NULL FALSE \u2075\u207b\u00b2 MISSING \u2075\u207b\u00b3 MISSING MISSING MISSING

        Note

        This differs from Server SQL++ in the following instances: \u2075\u207b\u00b9 Server will return: NULL instead of FALSE \u2075\u207b\u00b2 Server will return: MISSING instead of FALSE \u2075\u207b\u00b3 Server will return: NULL instead of MISSING
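        Table 6 can be reproduced with a small three-valued helper: NULL is first converted to FALSE (so callers pass false for NULL operands), and MISSING is modeled as Kotlin null. A plain-Kotlin sketch, not the Kotbase implementation:

        ```kotlin
        // CBL AND: a definite FALSE decides the result regardless of the other
        // operand; otherwise MISSING (null) propagates.
        fun cblAnd(a: Boolean?, b: Boolean?): Boolean? = when {
            a == false || b == false -> false
            a == null || b == null -> null
            else -> true
        }

        // CBL OR: a definite TRUE decides the result; otherwise MISSING propagates.
        fun cblOr(a: Boolean?, b: Boolean?): Boolean? = when {
            a == true || b == true -> true
            a == null || b == null -> null
            else -> false
        }
        ```

        For example, TRUE AND MISSING is MISSING, but FALSE AND MISSING is FALSE, matching rows \u2075\u207b\u00b2 of the table.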

        "},{"location":"n1ql-query-strings/#string-operator","title":"String Operator","text":""},{"location":"n1ql-query-strings/#purpose_22","title":"Purpose","text":"

        A single string operator is provided. It enables string concatenation.

        Table 7. String Operators

        Op Description Example || Concatenating SELECT firstnm || lastnm AS fullname FROM db"},{"location":"n1ql-query-strings/#unary-operators","title":"Unary Operators","text":""},{"location":"n1ql-query-strings/#purpose_23","title":"Purpose","text":"

        Three unary operators are provided. They operate by modifying an expression, making it numerically positive or negative, or by logically negating its value (TRUE becomes FALSE).

        "},{"location":"n1ql-query-strings/#syntax_19","title":"Syntax","text":"
        // UNARY_OP _ expr\n

        Table 8. Unary Operators

        Op Description Example + Positive value WHERE v1 = +10 - Negative value WHERE v1 = -10 NOT Logical Negate operator * WHERE \"James\" NOT IN contactsList

        * The NOT operator is often used in conjunction with operators such as IN, LIKE, MATCH, and BETWEEN. A NOT operation on a NULL value returns NULL. A NOT operation on a MISSING value returns MISSING.

        Table 9. NOT Operation TABLE

        a NOT a TRUE FALSE FALSE TRUE NULL FALSE MISSING MISSING"},{"location":"n1ql-query-strings/#collate-operators","title":"COLLATE Operators","text":""},{"location":"n1ql-query-strings/#purpose_24","title":"Purpose","text":"

        Collate operators specify how the string comparison is conducted.

        "},{"location":"n1ql-query-strings/#usage","title":"Usage","text":"

        The collate operator is used in conjunction with string comparison expressions and ORDER BY clauses. It allows for one or more collations.

        If multiple collations are used, they must be specified in parentheses. When only one collation is used, the parentheses are optional.

        Note

        Collate is not supported by Server SQL++

        "},{"location":"n1ql-query-strings/#syntax_20","title":"Syntax","text":"
        collate = COLLATE collation | '(' collation (_ collation )* ')'\n\ncollation = NO? (UNICODE | CASE | DIACRITICS) WB\n
        "},{"location":"n1ql-query-strings/#arguments_8","title":"Arguments","text":"

        The available collation options are:

        • UNICODE: Conduct a Unicode comparison; the default is to do ASCII comparison.
        • CASE: Conduct case-sensitive comparison.
        • DIACRITIC: Take account of accents and diacritics in the comparison; on by default.
        • NO: This can be used as a prefix to the other collations, to disable them (for example: NOCASE to enable case-insensitive comparison)
        "},{"location":"n1ql-query-strings/#example_14","title":"Example","text":"
        SELECT department FROM db WHERE (name = \"fred\") COLLATE UNICODE\n
        SELECT department FROM db WHERE (name = \"fred\")\nCOLLATE (UNICODE)\n
        SELECT department FROM db WHERE (name = \"fred\") COLLATE (UNICODE CASE)\n
        SELECT name FROM db ORDER BY name COLLATE (UNICODE DIACRITIC)\n
        "},{"location":"n1ql-query-strings/#conditional-operator","title":"CONDITIONAL Operator","text":""},{"location":"n1ql-query-strings/#purpose_25","title":"Purpose","text":"

        The Conditional (or CASE) operator evaluates conditional logic in a similar way to the IF/ELSE operator.

        "},{"location":"n1ql-query-strings/#syntax_21","title":"Syntax","text":"
        CASE (expression) (WHEN expression THEN expression)+ (ELSE expression)? END\n\nCASE (expression)? (!WHEN expression)?\n  (WHEN expression THEN expression)+ (ELSE expression)? END\n

        Both Simple Case and Searched Case expressions are supported. The syntactic difference is that the Simple Case expression has an expression after the CASE keyword.

        1. Simple Case Expression
          • If the CASE expression is equal to the first WHEN expression, the result is the THEN expression.
          • Otherwise, any subsequent WHEN clauses are evaluated in the same way.
          • If no match is found, the result of the CASE expression is the ELSE expression, or NULL if no ELSE expression was provided.
        2. Searched Case Expression
          • If the first WHEN expression is TRUE, the result of this expression is its THEN expression.
          • Otherwise, subsequent WHEN clauses are evaluated in the same way. If no WHEN clause evaluates to TRUE, the result of the expression is the ELSE expression, or NULL if no ELSE expression was provided.
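        Both forms amount to a first-match scan. A plain-Kotlin sketch of the evaluation order (illustrative helpers, with Kotlin null standing in for NULL):

        ```kotlin
        // Searched CASE: evaluate WHEN conditions in order; the first TRUE wins,
        // otherwise the ELSE value (defaulting to null) is the result.
        fun <R> searchedCase(vararg whens: Pair<Boolean, R>, elseValue: R? = null): R? =
            whens.firstOrNull { it.first }?.second ?: elseValue

        // Simple CASE: compare the subject against each WHEN value in order.
        fun <T, R> simpleCase(subject: T, vararg whens: Pair<T, R>, elseValue: R? = null): R? =
            whens.firstOrNull { it.first == subject }?.second ?: elseValue
        ```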
        "},{"location":"n1ql-query-strings/#example_15","title":"Example","text":"

        Example 16. Simple Case

        SELECT CASE state WHEN 'CA' THEN 'Local' ELSE 'Non-Local' END FROM db\n

        Example 17. Searched Case

        SELECT CASE WHEN shippedOn IS NOT NULL THEN 'SHIPPED' ELSE 'NOT-SHIPPED' END FROM db\n
        "},{"location":"n1ql-query-strings/#functions","title":"Functions","text":"

        In this section Aggregation Functions | Array Functions | Conditional Functions | Date and Time Functions | Full Text Search Functions | Maths Functions | Metadata Functions | Pattern Searching Functions | String Functions | Type Checking Functions | Type Conversion Functions

        "},{"location":"n1ql-query-strings/#purpose_26","title":"Purpose","text":"

        Functions are also expressions.

        "},{"location":"n1ql-query-strings/#syntax_22","title":"Syntax","text":"

        The function syntax is the same as Java\u2019s method syntax. It starts with the function name, followed by optional arguments inside parentheses.

        function = functionName parenExprs\n\nfunctionName  = IDENTIFIER\n\nparenExprs = '(' ( expression (_ ',' _ expression )* )? ')'\n
        "},{"location":"n1ql-query-strings/#aggregation-functions","title":"Aggregation Functions","text":"

        Table 10. Aggregation Functions

        Function Description AVG(expr) Returns average value of the number values in the group COUNT(expr) Returns a count of all values in the group MIN(expr) Returns the minimum value in the group MAX(expr) Returns the maximum value in the group SUM(expr) Returns the sum of all number values in the group"},{"location":"n1ql-query-strings/#array-functions","title":"Array Functions","text":"

        Table 11. Array Functions

        Function Description ARRAY_AGG(expr) Returns an array of the non-MISSING group values in the input expression, including NULL values. ARRAY_AVG(expr) Returns the average of all non-NULL number values in the array; or NULL if there are none ARRAY_CONTAINS(expr) Returns TRUE if the value exists in the array; otherwise FALSE ARRAY_COUNT(expr) Returns the number of non-NULL values in the array ARRAY_IFNULL(expr) Returns the first non-NULL value in the array ARRAY_MAX(expr) Returns the largest non-NULL, non-MISSING value in the array ARRAY_MIN(expr) Returns the smallest non-NULL, non-MISSING value in the array ARRAY_LENGTH(expr) Returns the length of the array ARRAY_SUM(expr) Returns the sum of all non-NULL numeric values in the array"},{"location":"n1ql-query-strings/#conditional-functions","title":"Conditional Functions","text":"

        Table 12. Conditional Functions

        Function Description IFMISSING(expr1, expr2, \u2026) Returns the first non-MISSING value, or NULL if all values are MISSING IFMISSINGORNULL(expr1, expr2, \u2026) Returns the first non-NULL and non-MISSING value, or NULL if all values are NULL or MISSING IFNULL(expr1, expr2, \u2026) Returns the first non-NULL value, or NULL if all values are NULL MISSINGIF(expr1, expr2) Returns MISSING when expr1 = expr2; otherwise returns expr1.Returns MISSING if either or both expressions are MISSING.Returns NULL if either or both expressions are NULL. NULLIF(expr1, expr2) Returns NULL when expr1 = expr2; otherwise returns expr1.Returns MISSING if either or both expressions are MISSING.Returns NULL if either or both expressions are NULL."},{"location":"n1ql-query-strings/#date-and-time-functions","title":"Date and Time Functions","text":"

        Table 13. Date and Time Functions

        Function Description STR_TO_MILLIS(expr) Returns the number of milliseconds since the Unix epoch of the given ISO 8601 date input string. STR_TO_UTC(expr) Returns the ISO 8601 UTC date time string of the given ISO 8601 date input string. MILLIS_TO_STR(expr) Returns an ISO 8601 date time string in the device-local timezone of the given number of milliseconds since the Unix epoch expression. MILLIS_TO_UTC(expr) Returns the UTC ISO 8601 date time string of the given number of milliseconds since the Unix epoch expression."},{"location":"n1ql-query-strings/#full-text-search-functions","title":"Full Text Search Functions","text":"

        Table 14. FTS Functions

        Function Description Example MATCH(indexName, term) Returns TRUE if term expression matches the FTS indexed term. indexName identifies the FTS index, term expression to search for matching. WHERE MATCH (description, \u201ccouchbase\u201d) RANK(indexName) Returns a numeric value indicating how well the current query result matches the full-text query when performing the MATCH. indexName is an IDENTIFIER for the FTS index. WHERE MATCH (description, \u201ccouchbase\u201d) ORDER BY RANK(description)"},{"location":"n1ql-query-strings/#maths-functions","title":"Maths Functions","text":"

        Table 15. Maths Functions

        Function Description ABS(expr) Returns the absolute value of a number. ACOS(expr) Returns the arc cosine in radians. ASIN(expr) Returns the arcsine in radians. ATAN(expr) Returns the arctangent in radians. ATAN2(expr1,expr2) Returns the arctangent of expr1/expr2. CEIL(expr) Returns the smallest integer not less than the number. COS(expr) Returns the cosine value of the expression. DIV(expr1, expr2) Returns float division of expr1 and expr2.Both expr1 and expr2 are cast to a double number before division.The returned result is always a double. DEGREES(expr) Converts radians to degrees. E() Returns the base of natural logarithms. EXP(expr) Returns e raised to the power of expr. FLOOR(expr) Returns the largest integer not greater than the number. IDIV(expr1, expr2) Returns integer division of expr1 and expr2. LN(expr) Returns log base e value. LOG(expr) Returns log base 10 value. PI() Returns the value of PI. POWER(expr1, expr2) Returns expr1 raised to the power of expr2. RADIANS(expr) Converts degrees to radians. ROUND(expr (, digits_expr)?) Returns the rounded value to the given number of integer digits to the right of the decimal point (left if digits is negative). Digits are 0 if not given.The function uses the Rounding Away From Zero convention to round midpoint values to the next number away from zero (so, for example, ROUND(1.75) returns 1.8 but ROUND(1.85) returns 1.9). * ROUND_EVEN(expr (, digits_expr)?) Returns the rounded value to the given number of integer digits to the right of the decimal point (left if digits is negative). Digits are 0 if not given.The function uses the Rounding to Nearest Even (Banker\u2019s Rounding) convention, which rounds midpoint values to the nearest even number (for example, both ROUND_EVEN(1.75) and ROUND_EVEN(1.85) return 1.8). SIGN(expr) Returns -1 for negative, 0 for zero, and 1 for positive numbers. SIN(expr) Returns sine value. SQRT(expr) Returns square root value. TAN(expr) Returns tangent value. TRUNC(expr (, digits_expr)?) 
Returns a truncated number to the given number of integer digits to the right of the decimal point (left if digits is negative). Digits are 0 if not given.

        * The behavior of the ROUND() function is different from Server SQL++ ROUND(), which rounds the midpoint values using Rounding to Nearest Even convention.
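        The two midpoint conventions can be demonstrated with java.math.BigDecimal (a JVM-only plain-Kotlin sketch, not the Kotbase implementation):

        ```kotlin
        import java.math.BigDecimal
        import java.math.RoundingMode

        // ROUND(): half-away-from-zero (HALF_UP).
        fun round(value: Double, digits: Int = 0): Double =
            BigDecimal(value.toString()).setScale(digits, RoundingMode.HALF_UP).toDouble()

        // ROUND_EVEN(): banker's rounding (HALF_EVEN), matching Server SQL++'s ROUND().
        fun roundEven(value: Double, digits: Int = 0): Double =
            BigDecimal(value.toString()).setScale(digits, RoundingMode.HALF_EVEN).toDouble()
        ```

        The functions diverge only on midpoints: with one digit, 1.75 rounds to 1.8 under both, but 1.85 rounds to 1.9 under ROUND() and to 1.8 under ROUND_EVEN().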

        "},{"location":"n1ql-query-strings/#metadata-functions","title":"Metadata Functions","text":"

        Table 16. Metadata Functions

        Function Description Example META(dataSourceName?) Returns a dictionary containing metadata properties including:
        • id : document identifier
        • sequence : document mutation sequence number
        • deleted : flag indicating whether document is deleted or not
        • expiration : document expiration date in timestamp formatThe optional dataSourceName identifies the database or the database alias name.
        To access a specific metadata property, use the dot expression. SELECT META() FROM dbSELECT META().id, META().sequence, META().deleted, META().expiration FROM dbSELECT p.name, r.rating FROM product as p INNER JOIN reviews AS r ON META(r).id IN p.reviewList WHERE META(p).id = \"product320\""},{"location":"n1ql-query-strings/#pattern-searching-functions","title":"Pattern Searching Functions","text":"

        Table 17. Pattern Searching Functions

        Function Description REGEXP_CONTAINS(expr, pattern) Returns TRUE if the string value contains any sequence that matches the regular expression pattern. REGEXP_LIKE(expr, pattern) Returns TRUE if the string value exactly matches the regular expression pattern. REGEXP_POSITION(expr, pattern) Returns the first position of the occurrence of the regular expression pattern within the input string expression. Returns -1 if no match is found. Position counting starts from zero. REGEXP_REPLACE(expr, pattern, repl [, n]) Returns a new string with occurrences of pattern replaced with repl. If n is given, at most n replacements are performed. If n is not given, all matching occurrences are replaced."},{"location":"n1ql-query-strings/#string-functions","title":"String Functions","text":"

        Table 18. String Functions

        Function Description CONTAINS(expr, substring_expr) Returns true if the substring exists within the input string, otherwise returns false. LENGTH(expr) Returns the length of a string. The length is defined as the number of characters within the string. LOWER(expr) Returns the lowercase string of the input string. LTRIM(expr) Returns the string with all leading whitespace characters removed. RTRIM(expr) Returns the string with all trailing whitespace characters removed. TRIM(expr) Returns the string with all leading and trailing whitespace characters removed. UPPER(expr) Returns the uppercase string of the input string."},{"location":"n1ql-query-strings/#type-checking-functions","title":"Type Checking Functions","text":"

        Table 19. Type Checking Functions

        Function Description ISARRAY(expr) Returns TRUE if expression is an array, otherwise returns MISSING, NULL or FALSE. ISATOM(expr) Returns TRUE if expression is a Boolean, number, or string, otherwise returns MISSING, NULL or FALSE. ISBOOLEAN(expr) Returns TRUE if expression is a Boolean, otherwise returns MISSING, NULL or FALSE. ISNUMBER(expr) Returns TRUE if expression is a number, otherwise returns MISSING, NULL or FALSE. ISOBJECT(expr) Returns TRUE if expression is an object (dictionary), otherwise returns MISSING, NULL or FALSE. ISSTRING(expr) Returns TRUE if expression is a string, otherwise returns MISSING, NULL or FALSE. TYPE(expr) Returns one of the following strings, based on the value of expression:
        • \u201cmissing\u201d
        • \u201cnull\u201d
        • \u201cboolean\u201d
        • \u201cnumber\u201d
        • \u201cstring\u201d
        • \u201carray\u201d
        • \u201cobject\u201d
        • \u201cbinary\u201d
        "},{"location":"n1ql-query-strings/#type-conversion-functions","title":"Type Conversion Functions","text":"

        Table 20. Type Conversion Functions

        Function Description TOARRAY(expr) Returns MISSING if the value is MISSING.Returns NULL if the value is NULL.Returns the array itself.Returns all other values wrapped in an array. TOATOM(expr) Returns MISSING if the value is MISSING.Returns NULL if the value is NULL.Returns an array of a single item if the value is an array.Returns an object of a single key/value pair if the value is an object.Returns boolean, numbers, or stringsReturns NULL for all other values. TOBOOLEAN(expr) Returns MISSING if the value is MISSING.Returns NULL if the value is NULL.Returns FALSE if the value is FALSE.Returns FALSE if the value is 0 or NaN.Returns FALSE if the value is an empty string, array, and object.Return TRUE for all other values. TONUMBER(expr) Returns MISSING if the value is MISSING.Returns NULL if the value is NULL.Returns 0 if the value is FALSE.Returns 1 if the value is TRUE.Returns NUMBER if the value is NUMBER.Returns NUMBER parsed from the string value.Returns NULL for all other values. TOOBJECT(expr) Returns MISSING if the value is MISSING.Returns NULL if the value is NULL.Returns the object if the value is an object.Returns an empty object for all other values. TOSTRING(expr) Returns MISSING if the value is MISSING.Returns NULL if the value is NULL.Returns \u201cfalse\u201d if the value is FALSE.Returns \u201ctrue\u201d if the value is TRUE.Returns NUMBER in String if the value is NUMBER.Returns the string value if the value is a string.Returns NULL for all other values."},{"location":"n1ql-query-strings/#querybuilder-differences","title":"QueryBuilder Differences","text":"

        Couchbase Lite SQL++ Query supports all QueryBuilder features, except Predictive Query and Index. See Table 21 for the features supported by SQL++ but not by QueryBuilder.

        Table 21. QueryBuilder Differences

        Category Components Conditional Operator CASE(WHEN \u2026 THEN \u2026 ELSE ..) Array Functions ARRAY_AGGARRAY_AVGARRAY_COUNTARRAY_IFNULLARRAY_MAXARRAY_MINARRAY_SUM Conditional Functions IFMISSINGIFMISSINGORNULLIFNULLMISSINGIFNULLIF Math Functions DIVIDIVROUND_EVEN Pattern Matching Functions REGEXP_CONTAINSREGEXP_LIKEREGEXP_POSITIONREGEXP_REPLACE Type Checking Functions ISARRAYISATOMISBOOLEANISNUMBERISOBJECTISSTRING TYPE Type Conversion Functions TOARRAYTOATOMTOBOOLEANTONUMBERTOOBJECTTOSTRING"},{"location":"n1ql-query-strings/#query-parameters","title":"Query Parameters","text":"

        You can provide runtime parameters to your SQL++ query to make it more flexible.

        To specify substitutable parameters within your query string, prefix the name with $ (for example, $type) \u2014 see Example 18.

        Example 18. Running a SQL++ Query

        val query = database.createQuery(\n    \"SELECT META().id AS id FROM _ WHERE type = \\$type\"\n) \n\nquery.parameters = Parameters().setString(\"type\", \"hotel\") \n\nreturn query.execute().allResults()\n
        1. Define a parameter placeholder $type
        2. Set the value of the $type parameter
        "},{"location":"n1ql-server-differences/","title":"SQL++ Server Differences","text":"

        Differences between Couchbase Server SQL++ and Couchbase Lite SQL++

        Important

        N1QL is Couchbase\u2019s implementation of the developing SQL++ standard. As such the terms N1QL and SQL++ are used interchangeably in Couchbase documentation unless explicitly stated otherwise.

        There are several minor but notable behavior differences between SQL++ for Mobile queries and SQL++ for Server, as shown in Table 1.

        In some instances, if required, you can force SQL++ for Mobile to work in the same way as SQL++ for Server. This table compares Couchbase Server and Mobile instances:

        Table 1. SQL++ Query Comparison

        Feature SQL++ for Couchbase Server SQL++ for Mobile Scopes and Collections SELECT * FROM travel-sample.inventory.airport SELECT * FROM inventory.airport USE KEYS SELECT fname, email FROM tutorial USE KEYS [\"dave\", \"ian\"]; SELECT fname, email FROM tutorial WHERE meta().id IN (\"dave\", \"ian\"); ON KEYS SELECT * FROM `user` u JOIN orders o ON KEYS ARRAY s.order_id FOR s IN u.order_history END; SELECT * FROM user u, u.order_history s JOIN orders o ON s.order_id = meta(o).id; ON KEY SELECT * FROM `user` u JOIN orders o ON KEY o.user_id FOR u; SELECT * FROM user u JOIN orders o ON meta(u).id = o.user_id; NEST SELECT * FROM `user` u NEST orders orders ON KEYS ARRAY s.order_id FOR s IN u.order_history END; NEST/UNNEST not supported LEFT OUTER NEST SELECT * FROM user u LEFT OUTER NEST orders orders ON KEYS ARRAY s.order_id FOR s IN u.order_history END; NEST/UNNEST not supported ARRAY ARRAY i FOR i IN [1, 2] END (SELECT VALUE i FROM [1, 2] AS i) ARRAY FIRST FIRST v FOR v IN arr arr[0] LIMIT l OFFSET o Does not allow OFFSET without LIMIT Allows OFFSET without LIMIT UNION, INTERSECT, and EXCEPT All three are supported (with ALL and DISTINCT variants) Not supported OUTER JOIN Both LEFT and RIGHT OUTER JOIN supported Only LEFT OUTER JOIN supported (and necessary for query expressibility) <, <=, =, etc. operators Can compare either complex values or scalar values Only scalar values may be compared ORDER BY Result sequencing is based on specific rules described in SQL++ (server) OrderBy clause Result sequencing is based on the SQLite ordering described in SQLite select overview. The ordering of Dictionary and Array objects is based on binary ordering. 
SELECT DISTINCT Supported SELECT DISTINCT VALUE is supported when the returned values are scalars CREATE INDEX Supported Not Supported INSERT/\u200bUPSERT/\u200bDELETE Supported Not Supported"},{"location":"n1ql-server-differences/#boolean-logic-rules","title":"Boolean Logic Rules","text":"SQL++ for Couchbase Server SQL++ for Mobile Couchbase Server operates in the same way as Couchbase Lite, except:
        • MISSING, NULL and FALSE are FALSE
        • The number 0 is FALSE
        • Empty strings, arrays, and objects are FALSE
        • All other values are TRUE
        You can choose to use Couchbase Server\u2019s SQL++ rules by using the TOBOOLEAN(expr) function to convert a value to its boolean value. SQL++ for Mobile\u2019s boolean logic rules are based on SQLite\u2019s, so:
        • TRUE is TRUE, and FALSE is FALSE
        • Numbers 0 or 0.0 are FALSE
        • Arrays and dictionaries are FALSE
        • String and Blob values are TRUE if they cast to a non-zero number, or FALSE if they cast to 0 or 0.0 \u2014 see SQLite\u2019s CAST and Boolean expressions documentation for more details
        • NULL is FALSE
        • MISSING is MISSING
        "},{"location":"n1ql-server-differences/#logical-operations","title":"Logical Operations","text":"

        In SQL++ for Mobile, logical operations will return one of three possible values: TRUE, FALSE, or MISSING.

        Logical operations with the MISSING value could result in TRUE or FALSE if the result can be determined regardless of the missing value, otherwise the result will be MISSING.

        In SQL++ for Mobile \u2014 unlike SQL++ for Server \u2014 NULL is implicitly converted to FALSE before evaluating logical operations. Table 2 summarizes the result of logical operations with different operand values and also shows where the Couchbase Server behavior differs.

        Table 2. Logical Operations Comparison

        Operanda SQL++ for Mobile SQL++ for Server b a AND b a OR b b a AND b a OR b TRUE TRUE TRUE TRUE - - - FALSE FALSE TRUE - - - NULL FALSE TRUE - NULL - MISSING MISSING TRUE - - - FALSE TRUE FALSE TRUE - - - FALSE FALSE FALSE - - - NULL FALSE FALSE - - NULL MISSING FALSE MISSING - - - NULL TRUE FALSE TRUE - NULL - FALSE FALSE FALSE - - NULL NULL FALSE FALSE - NULL NULL MISSING FALSE MISSING - MISSING NULL MISSING TRUE MISSING TRUE - - - FALSE FALSE MISSING - - - NULL FALSE MISSING - MISSING NULL MISSING MISSING MISSING - - -"},{"location":"n1ql-server-differences/#crud-operations","title":"CRUD Operations","text":"

        SQL++ for Mobile only supports Read or Query operations.

        SQL++ for Server fully supports CRUD operations.

        "},{"location":"n1ql-server-differences/#functions","title":"Functions","text":""},{"location":"n1ql-server-differences/#division-operator","title":"Division Operator","text":"SQL++ for Server SQL++ for Mobile SQL++ for Server always performs float division regardless of the types of the operands.You can force this behavior in SQL++ for Mobile by using the DIV(x, y) function. The operand types determine the division operation performed.If both are integers, integer division is used.If one is a floating number, then float division is used."},{"location":"n1ql-server-differences/#round-function","title":"Round Function","text":"SQL++ for Server SQL++ for Mobile SQL++ for Server ROUND() uses the Rounding to Nearest Even convention (for example, ROUND(1.85) returns 1.8).You can force this behavior in Couchbase Lite by using the ROUND_EVEN() function. The ROUND() function returns a value to the given number of integer digits to the right of the decimal point (left if digits is negative).
        • Digits are 0 if not given.
        • Midpoint values are handled using the Rounding Away From Zero convention, which rounds them to the next number away from zero (for example, ROUND(1.85) returns 1.9).
        "},{"location":"paging/","title":"Paging","text":"

        The paging extensions are built on Cash App's Multiplatform Paging, which brings Google's AndroidX Paging to Kotlin Multiplatform. Kotbase Paging provides a PagingSource that performs limit/offset paging queries based on a user-supplied database query.

        "},{"location":"paging/#installation","title":"Installation","text":"Enterprise EditionCommunity Edition build.gradle.kts
        kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite-ee-paging:3.1.3-1.1.0\")\n        }\n    }\n}\n
        build.gradle.kts
        kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite-paging:3.1.3-1.1.0\")\n        }\n    }\n}\n
        "},{"location":"paging/#usage","title":"Usage","text":"
        // Uses kotlinx-serialization JSON processor\n@Serializable\ndata class Hotel(val id: String, val type: String, val name: String)\n\nval select = select(Meta.id, \"type\", \"name\")\nval mapper = { json: String ->\n    Json.decodeFromString<Hotel>(json)\n}\nval queryProvider: From.() -> LimitRouter = {\n    where {\n        (\"type\" equalTo \"hotel\") and\n        (\"state\" equalTo \"California\")\n    }\n    .orderBy { \"name\".ascending() }\n}\n\nval pagingSource = QueryPagingSource(\n    EmptyCoroutineContext,\n    select,\n    collection,\n    mapper,\n    queryProvider\n)\n
        "},{"location":"passive-peer/","title":"Passive Peer","text":"

        How to set up a listener to accept a replicator connection and sync using peer-to-peer

        Android enablers

        Allow Unencrypted Network Traffic

        To use cleartext (unencrypted) network traffic (http:// and/or ws://), include android:usesCleartextTraffic=\"true\" in the application element of the manifest as shown on developer.android.com. This is not recommended in production.
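For reference, the manifest change looks like this (a fragment only; a real application element carries additional attributes):

```xml
<!-- AndroidManifest.xml (fragment): allow http:// and ws:// traffic -->
<application android:usesCleartextTraffic="true">
    <!-- activities, services, etc. -->
</application>
```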

        iOS Restrictions

        iOS 14 Applications

        When your application attempts to access the user\u2019s local network, iOS will prompt them to allow (or deny) access. You can customize the message presented to the user by editing the description for the NSLocalNetworkUsageDescription key in the Info.plist.

        Use Background Threads

        As with any network or file I/O activity, Couchbase Lite activities should not be performed on the UI thread. Always use a background thread.

        Code Snippets

        All code examples are indicative only. They demonstrate the basic concepts and approaches to using a feature. Use them as inspiration and adapt these examples to best practice when developing applications for your platform.

        "},{"location":"passive-peer/#introduction","title":"Introduction","text":"

        This is an Enterprise Edition feature.

        This content provides code and configuration examples covering the implementation of Peer-to-Peer Sync over WebSockets. Specifically, it covers the implementation of a Passive Peer.

        Couchbase\u2019s Passive Peer (also referred to as the server, or listener) will accept a connection from an Active Peer (also referred to as the client or replicator) and replicate database changes to synchronize both databases.

        Subsequent sections provide additional details and examples for the main configuration options.

        Secure Storage

        The use of TLS and its associated keys and certificates requires secure storage to minimize the chances of a security breach. The implementation of this storage differs from platform to platform \u2014 see Using Secure Storage.

        "},{"location":"passive-peer/#configuration-summary","title":"Configuration Summary","text":"

        You should configure and initialize a listener for each Couchbase Lite database instance you want to sync. There is no limit on the number of listeners you may configure \u2014 Example 1 shows a simple initialization and configuration process.

        Example 1. Listener configuration and initialization

        val listener = URLEndpointListener(\n    URLEndpointListenerConfigurationFactory.newConfig(\n        collections = collections,\n        port = 55990,\n        networkInterface = \"wlan0\",\n\n        enableDeltaSync = false,\n\n        // Configure server security\n        disableTls = false,\n\n        // Use an Anonymous Self-Signed Cert\n        identity = null,\n\n        // Configure Client Security using an Authenticator\n        // For example, Basic Authentication\n        authenticator = ListenerPasswordAuthenticator { usr, pwd ->\n            (usr === validUser) && (pwd.concatToString() == validPass)\n        }\n    )\n)\n\n// Start the listener\nlistener.start()\n
        1. Identify the collections from the local database to be used \u2014 see Initialize the Listener Configuration
        2. Optionally, choose a port to use. By default, the system will automatically assign a port \u2014 to override this, see Set Port and Network Interface
        3. Optionally, choose a network interface to use. By default, the system will listen on all network interfaces \u2014 to override this see Set Port and Network Interface
        4. Optionally, choose to sync only changes. The default is not to enable delta-sync \u2014 see Delta Sync
        5. Set server security. TLS is enabled by default, so you can usually omit this line. But you can, optionally, disable TLS (not advisable in production) \u2014 see TLS Security
        6. Set the credentials this server will present to the client for authentication. Here we show the default TLS authentication, which is an anonymous self-signed certificate. The server must always authenticate itself to the client.
        7. Set client security \u2014 define the credentials the server expects the client to present for authentication. Here we show how basic authentication is configured to authenticate the client-supplied credentials from the http authentication header against valid credentials \u2014 see Authenticating the Client for more options. Note that client authentication is optional.
        8. Initialize the listener using the configuration settings.
        9. Start the listener
        "},{"location":"passive-peer/#device-discovery","title":"Device Discovery","text":"

        This phase is optional: if the listener is initialized on a well-known URL endpoint (for example, a static IP address or well-known DNS address), then you can configure Active Peers to connect directly to that endpoint.

        Before initiating the listener, you may execute a peer discovery phase. For the Passive Peer, this involves advertising the service using, for example, Network Service Discovery on Android or Bonjour on iOS and waiting for an invite from the Active Peer. The connection is established once the Passive Peer has authenticated and accepted an Active Peer\u2019s invitation.

        "},{"location":"passive-peer/#initialize-the-listener-configuration","title":"Initialize the Listener Configuration","text":"

        Initialize the listener configuration with the collections to sync from the local database \u2014 see Example 2. All other configuration values take their default setting.

        Each listener instance serves one Couchbase Lite database. Couchbase sets no hard limit on the number of listeners you can initialize.

        Example 2. Specify Local Database

        collections = collections,\n

        Set the local database using the URLEndpointListenerConfiguration's constructor URLEndpointListenerConfiguration(Database). The database must be opened before the listener is started.

        "},{"location":"passive-peer/#set-port-and-network-interface","title":"Set Port and Network Interface","text":""},{"location":"passive-peer/#port-number","title":"Port number","text":"

        The Listener will automatically select an available port if you do not specify one \u2014 see Example 3 for how to specify a port.

        Example 3. Specify a port

        port = 55990,\n

        To use a canonical port \u2014 one known to other applications \u2014 specify it explicitly using the port property shown here. Ensure that firewall rules do not block any port you do specify.

        "},{"location":"passive-peer/#network-interface","title":"Network Interface","text":"

        The listener will listen on all network interfaces by default.

        Example 4. Specify a Network Interface to Use

        networkInterface = \"wlan0\",\n

        To listen on a specific network interface, identify it explicitly using the networkInterface property shown here. This must be either an IP address or a network interface name such as en0.

        "},{"location":"passive-peer/#delta-sync","title":"Delta Sync","text":"

        Delta Sync allows clients to sync only those parts of a document that have changed. This can result in significant bandwidth consumption savings and throughput improvements. Both are valuable benefits, especially when network bandwidth is constrained.

        Example 5. Enable delta sync

        enableDeltaSync = false,\n

        Delta sync replication is not enabled by default. Use URLEndpointListenerConfiguration's isDeltaSyncEnabled property to activate or deactivate it.

        "},{"location":"passive-peer/#tls-security","title":"TLS Security","text":""},{"location":"passive-peer/#enable-or-disable-tls","title":"Enable or Disable TLS","text":"

        Define whether the connection is to use TLS or clear text.

        TLS-based encryption is enabled by default, and this setting should be used in any production environment. However, it can be disabled, for example, in development or test environments.

        When TLS is enabled, Couchbase Lite provides several options on how the listener may be configured with an appropriate TLS Identity \u2014 see Configure TLS Identity for Listener.

        Note

        On the Android platform, to use cleartext (unencrypted) network traffic (http:// and/or ws://), include android:usesCleartextTraffic=\"true\" in the application element of the manifest as shown on developer.android.com. This is not recommended in production.

        You can use URLEndpointListenerConfiguration's isTlsDisabled property to disable TLS communication if necessary.

        The isTlsDisabled setting must be false when Client Cert Authentication is required.

        Basic Authentication can be used with, or without, TLS.

        isTlsDisabled works in conjunction with TLSIdentity, to enable developers to define the key and certificate to be used.

        • If isTlsDisabled is true \u2014 TLS communication is disabled and the TLS identity is ignored. Active peers will use the ws:// URL scheme to connect to the listener.
        • If isTlsDisabled is false or not specified \u2014 TLS communication is enabled. Active peers will use the wss:// URL scheme to connect to the listener.
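The scheme choice can be expressed as a small helper on the active-peer side (a sketch; listenerUrl is a hypothetical function, not a Kotbase API):

```kotlin
// Hypothetical helper: build the endpoint URL an active peer would use,
// depending on whether the listener has TLS disabled.
fun listenerUrl(host: String, port: Int, dbName: String, tlsDisabled: Boolean): String {
    val scheme = if (tlsDisabled) "ws" else "wss"
    return "$scheme://$host:$port/$dbName"
}

fun main() {
    println(listenerUrl("10.0.2.2", 4984, "db", tlsDisabled = false)) // wss://10.0.2.2:4984/db
    println(listenerUrl("10.0.2.2", 4984, "db", tlsDisabled = true))  // ws://10.0.2.2:4984/db
}
```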
        "},{"location":"passive-peer/#configure-tls-identity-for-listener","title":"Configure TLS Identity for Listener","text":"

        Define the credentials the server will present to the client for authentication. Note that the server must always authenticate itself with the client \u2014 see Authenticating the Listener on Active Peer for how the client deals with this.

        Use URLEndpointListenerConfiguration's tlsIdentity property to configure the TLS Identity used in TLS communication.

        If TLSIdentity is not set, then the listener uses an auto-generated anonymous self-signed identity (unless isTlsDisabled = true). Whilst the client cannot use this to authenticate the server, it will use it to encrypt communication, giving a more secure option than non-TLS communication.

        The auto-generated anonymous self-signed identity is saved in secure storage for future use to obviate the need to re-generate it.

        Note

        Typically, you will configure the listener\u2019s TLS Identity once during the initial launch and re-use it from secure storage on any subsequent starts.

        Here are some example code snippets showing:

        • Importing a TLS identity \u2014 see Example 6
        • Setting TLS identity to expect self-signed certificate \u2014 see Example 7
        • Setting TLS identity to expect anonymous certificate \u2014 see Example 8

        Example 6. Import Listener\u2019s TLS identity

        TLS identity certificate import APIs are platform-specific.

        AndroidiOS/macOSJVM in androidMain
        config.isTlsDisabled = false\n\nKeyStoreUtils.importEntry(\n    \"PKCS12\",\n    context.assets.open(\"cert.p12\"),\n    \"store-password\".toCharArray(),\n    \"store-alias\",\n    \"key-password\".toCharArray(),\n    \"new-alias\"\n)\n\nconfig.tlsIdentity = TLSIdentity.getIdentity(\"new-alias\")\n
        in appleMain
        config.isTlsDisabled = false\n\nval path = NSBundle.mainBundle.pathForResource(\"cert\", ofType = \"p12\") ?: return\n\nval certData = NSData.dataWithContentsOfFile(path) ?: return\n\nval tlsIdentity = TLSIdentity.importIdentity(\n    data = certData.toByteArray(),\n    password = \"123\".toCharArray(),\n    alias = \"alias\"\n)\n\nconfig.tlsIdentity = tlsIdentity\n
        in jvmMain
        config.isTlsDisabled = false\n\nval keyStore = KeyStore.getInstance(\"PKCS12\")\nFiles.newInputStream(Path(\"cert.p12\")).use { keyStream ->\n    keyStore.load(\n        keyStream,\n        \"keystore-password\".toCharArray()\n    )\n}\n\nconfig.tlsIdentity = TLSIdentity.getIdentity(keyStore, \"alias\", \"keyPass\".toCharArray())\n
        1. Ensure TLS is used
        2. Get key and certificate data
        3. Use the retrieved data to create and store the TLS identity
        4. Set this identity as the one presented in response to the client\u2019s prompt

        Example 7. Create Self-Signed Cert

        CommonJVM in commonMain
        config.isTlsDisabled = false\n\nval attrs = mapOf(\n    TLSIdentity.CERT_ATTRIBUTE_COMMON_NAME to \"Couchbase Demo\",\n    TLSIdentity.CERT_ATTRIBUTE_ORGANIZATION to \"Couchbase\",\n    TLSIdentity.CERT_ATTRIBUTE_ORGANIZATION_UNIT to \"Mobile\",\n    TLSIdentity.CERT_ATTRIBUTE_EMAIL_ADDRESS to \"noreply@couchbase.com\"\n)\n\nval tlsIdentity = TLSIdentity.createIdentity(\n    true,\n    attrs,\n    Clock.System.now() + 1.days,\n    \"cert-alias\"\n)\n\nconfig.tlsIdentity = tlsIdentity\n
        in jvmMain
        // On the JVM platform, before calling\n// common TLSIdentity.createIdentity() or getIdentity()\n// load a KeyStore to use\nval keyStore = KeyStore.getInstance(\"PKCS12\")\nkeyStore.load(null, null)\nTLSIdentity.useKeyStore(keyStore)\n
        1. Ensure TLS is used.
        2. Map the required certificate attributes.
        3. Create the required TLS identity using the attributes. Add to secure storage as 'cert-alias'.
        4. Configure the server to present the defined identity credentials when prompted.

        Example 8. Use Anonymous Self-Signed Certificate

        This example uses an anonymous self-signed certificate. Generated certificates are held in secure storage.

        config.isTlsDisabled = false\n\n// Use an Anonymous Self-Signed Cert\nconfig.tlsIdentity = null\n
        1. Ensure TLS is used. This is the default setting.
        2. Authenticate using an anonymous self-signed certificate. This is the default setting.
        "},{"location":"passive-peer/#authenticating-the-client","title":"Authenticating the Client","text":"

        In this section Use Basic Authentication | Using Client Certificate Authentication | Delete Entry | The Impact of TLS Settings

        Define how the server (listener) will authenticate the client as one it is prepared to interact with.

        Whilst client authentication is optional, Couchbase Lite provides the necessary tools to implement it. Use the URLEndpointListenerConfiguration class\u2019s authenticator property to specify how the client-supplied credentials are to be authenticated.

        Valid options are:

        • No authentication \u2014 If you do not define a ListenerAuthenticator then all clients are accepted.
        • Basic Authentication \u2014 uses the ListenerPasswordAuthenticator to authenticate the client using the client-supplied username and password (from the http authentication header).
        • ListenerCertificateAuthenticator \u2014 which authenticates the client using a client supplied chain of one or more certificates. You should initialize the authenticator using one of the following constructors:
          • A list of one or more root certificates \u2014 the client supplied certificate must end at a certificate in this list if it is to be authenticated
          • A block of code that assumes total responsibility for authentication \u2014 it must return a boolean response (true for an authenticated client, or false for a failed authentication).
        "},{"location":"passive-peer/#use-basic-authentication","title":"Use Basic Authentication","text":"

        Define how to authenticate client-supplied username and password credentials. To use client-supplied certificates instead \u2014 see Using Client Certificate Authentication

        Example 9. Password authentication

        config.authenticator = ListenerPasswordAuthenticator { username, password ->\n    username == validUser && password.concatToString() == validPassword\n}\n

        Where username/password are the client-supplied values (from the http-authentication header) and validUser/validPassword are the values acceptable to the server.
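Because the password arrives as a CharArray, it can be compared without first converting it to a String (a hedged sketch; passwordMatches is a hypothetical helper, and the constant-time comparison is general security practice, not a Kotbase requirement):

```kotlin
// Hypothetical helper: compare a client-supplied password (CharArray)
// against the expected value without allocating a String, in constant time.
fun passwordMatches(supplied: CharArray, expected: CharArray): Boolean {
    if (supplied.size != expected.size) return false
    var diff = 0
    for (i in supplied.indices) diff = diff or (supplied[i].code xor expected[i].code)
    return diff == 0
}

fun main() {
    println(passwordMatches("123".toCharArray(), "123".toCharArray())) // true
    println(passwordMatches("123".toCharArray(), "124".toCharArray())) // false
}
```

A helper like this could be used inside the ListenerPasswordAuthenticator lambda in place of the concatToString() comparison.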

        "},{"location":"passive-peer/#using-client-certificate-authentication","title":"Using Client Certificate Authentication","text":"

        Define how the server will authenticate client-supplied certificates.

        There are two ways to authenticate a client:

        • A chain of one or more certificates that ends at a certificate in the list of certificates supplied to the constructor for ListenerCertificateAuthenticator \u2014 see Example 10
        • Application logic: This method assumes complete responsibility for verifying and authenticating the client \u2014 see Example 11. If the parameter supplied to the constructor for ListenerCertificateAuthenticator is of type ListenerCertificateAuthenticatorDelegate, all other forms of authentication are bypassed. The client response to the certificate request is passed to the method supplied as the constructor parameter. The logic should take the form of a function or lambda.

        Example 10. Set Certificate Authorization

        Configure the server (listener) to authenticate the client against a list of one or more certificates provided by the server to the ListenerCertificateAuthenticator.

        // Configure the client authenticator\n// to validate using ROOT CA\n// validId.certs is a list containing a client cert to accept\n// and any other certs needed to complete a chain between\n// the client cert and a CA\nval validId = TLSIdentity.getIdentity(\"Our Corporate Id\")\n    ?: throw IllegalStateException(\"Cannot find corporate id\")\n\n// accept only clients signed by the corp cert\nval listener = URLEndpointListener(\n    URLEndpointListenerConfigurationFactory.newConfig(\n        // get the identity \n        collections = collections,\n        identity = validId,\n        authenticator = ListenerCertificateAuthenticator(validId.certs)\n    )\n)\n
        1. Get the identity data to authenticate against. This can be, for example, from a resource file provided with the app, or an identity previously saved in secure storage.
        2. Configure the authenticator to authenticate the client supplied certificate(s) using these root certs. A valid client will provide one or more certificates that match a certificate in this list.
        3. Add the authenticator to the listener configuration.

        Example 11. Application Logic

        Configure the server (listener) to authenticate the client using user-supplied logic.

        // Configure authentication using application logic\nval corpId = TLSIdentity.getIdentity(\"OurCorp\")\n    ?: throw IllegalStateException(\"Cannot find corporate id\")\n\nconfig.tlsIdentity = corpId\n\nconfig.authenticator = ListenerCertificateAuthenticator { certs ->\n    // supply logic that returns boolean\n    // true for authenticate, false if not\n    // For instance:\n    certs[0].contentEquals(corpId.certs[0])\n}\n
        1. Get the identity data to authenticate against. This can be, for example, from a resource file provided with the app, or an identity previously saved in secure storage.
        2. Configure the authenticator to pass the root certificates to a user-supplied code block. This code assumes complete responsibility for authenticating the client-supplied certificate(s). It must return a boolean value, with true denoting that the client-supplied certificate is authentic.
        3. Add the authenticator to the listener configuration.
        "},{"location":"passive-peer/#delete-entry","title":"Delete Entry","text":"

        You can remove unwanted TLS identities from secure storage using the convenience API.

        Example 12. Deleting TLS Identities

        TLSIdentity.deleteIdentity(\"cert-alias\")\n
        "},{"location":"passive-peer/#the-impact-of-tls-settings","title":"The Impact of TLS Settings","text":"

        The table in this section shows the expected system behavior (with regard to security) depending on the TLS configuration settings deployed.

        Table 1. Expected system behavior

        isTlsDisabled: true \u2014 tlsIdentity (corresponding to server): ignored
        • TLS is disabled; all communication is plain text.
        isTlsDisabled: false \u2014 tlsIdentity: set to null
        • The system will auto-generate an anonymous self-signed cert.
        • Active Peers (clients) should be configured to accept self-signed certificates.
        • Communication is encrypted.
        isTlsDisabled: false \u2014 tlsIdentity: set to a server identity generated from a self- or CA-signed certificate
        • On first use \u2014 Bring your own certificate and private key; for example, using the TLSIdentity class\u2019s createIdentity() method to add it to the secure storage.
        • Each time \u2014 Use the server identity from the certificate stored in the secure storage; for example, using the TLSIdentity class\u2019s getIdentity() method with the alias you want to retrieve.
        • The system will use the configured identity.
        • Active Peers will validate the server certificate corresponding to the TLSIdentity (as long as they are configured to not skip validation \u2014 see TLS Security).
        "},{"location":"passive-peer/#start-listener","title":"Start Listener","text":"

        Once you have completed the listener\u2019s configuration settings you can initialize the listener instance and start it running \u2014 see Example 13.

        Example 13. Initialize and start listener

        // Initialize the listener\nval listener = URLEndpointListener(\n    URLEndpointListenerConfigurationFactory.newConfig(\n        collections = collections,\n        port = 55990,\n        networkInterface = \"wlan0\",\n\n        enableDeltaSync = false,\n\n        // Configure server security\n        disableTls = false,\n\n        // Use an Anonymous Self-Signed Cert\n        identity = null,\n\n        // Configure Client Security using an Authenticator\n        // For example, Basic Authentication\n        authenticator = ListenerPasswordAuthenticator { usr, pwd ->\n            (usr === validUser) && (pwd.concatToString() == validPass)\n        }\n    )\n)\n\n// Start the listener\nlistener.start()\n
        "},{"location":"passive-peer/#monitor-listener","title":"Monitor Listener","text":"

        Use the listener\u2019s status property to get counts of total and active connections \u2014 see Example 14.

        Note that these counts can be extremely volatile, so the actual number of active connections may have changed by the time the ConnectionStatus class returns a result.

        Example 14. Get connection counts

        val connectionCount = listener.status?.connectionCount\nval activeConnectionCount = listener.status?.activeConnectionCount\n
        "},{"location":"passive-peer/#stop-listener","title":"Stop Listener","text":"

        It is best practice to check the status of the listener\u2019s connections and stop only when you have confirmed that there are no active connections \u2014 see Example 15.

        Example 15. Stop listener using stop method

        listener.stop()\n

        Note

        Closing the database will also close the listener.
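The best practice of draining connections before stopping can be sketched generically (a hedged sketch with hypothetical parameters; a production version would add a timeout rather than poll indefinitely):

```kotlin
// Hypothetical helper: poll the active connection count and invoke stop
// only once all connections have drained.
fun stopWhenIdle(activeConnections: () -> Int, stop: () -> Unit, pollMillis: Long = 200) {
    while (activeConnections() > 0) {
        Thread.sleep(pollMillis)
    }
    stop()
}
```

For example: stopWhenIdle({ listener.status?.activeConnectionCount ?: 0 }, { listener.stop() }).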

        "},{"location":"peer-to-peer-sync/","title":"Peer-to-Peer Sync","text":"

        Couchbase Lite\u2019s Peer-to-Peer Synchronization enables edge devices to synchronize securely without consuming centralized cloud-server resources

        "},{"location":"peer-to-peer-sync/#introduction","title":"Introduction","text":"

        This is an Enterprise Edition feature.

        Couchbase Lite\u2019s Peer-to-Peer synchronization solution offers secure storage and bidirectional data synchronization between edge devices without needing a centralized cloud-based control point.

        Couchbase Lite\u2019s Peer-to-Peer data synchronization provides:

        • Instant WebSocket-based listener for use in Peer-to-Peer applications communicating over IP-based networks
        • Simple application development, enabling sync with a small amount of code
        • Optimized network bandwidth usage and reduced data transfer costs with Delta Sync support
        • Secure data sync with built-in support for Transport Layer Security (TLS) encryption and authentication
        • Document management \u2014 reducing conflicts in concurrent writes with built-in conflict management support
        • Built-in network resiliency
        "},{"location":"peer-to-peer-sync/#overview","title":"Overview","text":"

        Peer-to-Peer synchronization requires one Peer to act as the Listener to the other Peer\u2019s replicator.

        To use Peer-to-Peer synchronization in your application, you must therefore configure one Peer to act as a Listener, using the Couchbase Listener API; the most important classes are URLEndpointListener and URLEndpointListenerConfiguration.

        Example 1. Simple workflow

        1. Configure the listener (passive peer, or server)
        2. Initialize the listener, which listens for incoming WebSocket connections (on a user-defined, or auto-selected, port)
        3. Configure a replicator (active peer, or client)
        4. Use some form of discovery phase, perhaps with a zero-config protocol such as Network Service Discovery for Android or Bonjour for iOS, or use known URL endpoints, to identify a listener
        5. Point the replicator at the listener
        6. Initialize the replicator
        7. Replicator and listener engage in the configured security protocol exchanges to confirm connection
        8. If connection is confirmed then replication will commence, synchronizing the two data stores

        As this workflow shows, configuration involves both a Passive Peer and an Active Peer \u2014 a user-friendly listener configuration is shown in Basic Setup.

        You can also learn how to implement Peer-to-Peer synchronization by referring to our tutorial \u2014 see Getting Started with Peer-to-Peer Synchronization.

        "},{"location":"peer-to-peer-sync/#features","title":"Features","text":"

        Couchbase Lite\u2019s Peer-to-Peer synchronization solution provides support for cross-platform synchronization, for example, between Android and iOS devices.

        Each listener instance serves one Couchbase Lite database. However, there is no hard limit on the number of listener instances you can associate with a database.

        Having a listener on a database still allows you to open replications to other clients. For example, a listener can actively begin replicating to other listeners while listening for connections. These replications can be for the same or a different database.

        The listener will use a user-specified port, or automatically select one if no port is specified. It will also listen on all available network interfaces, unless you specify a particular network interface.

        "},{"location":"peer-to-peer-sync/#security","title":"Security","text":"

        Couchbase Lite\u2019s Peer-to-Peer synchronization supports encryption and authentication over TLS with multiple modes, including:

        • No encryption (clear text)
        • CA cert
        • Self-signed cert
        • Anonymous self-signed \u2014 an auto-generated anonymous TLS identity is generated if no identity is specified. This TLS identity provides encryption but not authentication. Any self-signed certificates generated by the convenience API are stored in secure storage.

        The replicator (client) can pin the listener\u2019s certificate for authentication purposes.

        Support is also provided for basic authentication using username and password credentials. Whilst this can be used in clear text mode, developers are strongly advised to use TLS encryption.

        For testing and development purposes, support is provided for the client (active, replicator) to skip verification of self-signed certificates; this mode should not be used in production.

        "},{"location":"peer-to-peer-sync/#error-handling","title":"Error Handling","text":"

        When a listener is stopped, all connected replicators are notified by a WebSocket error. Your application should distinguish between transient and permanent connectivity errors.

        "},{"location":"peer-to-peer-sync/#passive-peers","title":"Passive peers","text":"

        A Passive Peer losing connectivity with an Active Peer will clean up any associated endpoint connections to that peer. The Active Peer may attempt to reconnect to the Passive Peer.

        "},{"location":"peer-to-peer-sync/#active-peers","title":"Active peers","text":"

        An Active Peer permanently losing connectivity with a Passive Peer will cease replicating.

        An Active Peer temporarily losing connectivity with a passive Peer will use exponential backoff functionality to attempt reconnection.
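The reconnection behavior can be illustrated with a capped exponential backoff (an illustrative sketch only; the base, cap, and formula are assumptions \u2014 Couchbase Lite's actual retry schedule is internal to the replicator):

```kotlin
import kotlin.math.min
import kotlin.math.pow

// Illustrative capped exponential backoff: the delay grows exponentially
// per attempt, up to a cap.
fun backoffDelaySeconds(attempt: Int, baseSeconds: Double = 2.0, capSeconds: Double = 300.0): Double =
    min(baseSeconds.pow(attempt), capSeconds)

fun main() {
    println(backoffDelaySeconds(1))  // 2.0
    println(backoffDelaySeconds(3))  // 8.0
    println(backoffDelaySeconds(10)) // 300.0 (capped)
}
```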

        "},{"location":"peer-to-peer-sync/#delta-sync","title":"Delta Sync","text":"

        Optional delta-sync support is provided but is inactive by default.

        Delta-sync can be enabled on a per-replication basis provided that the databases involved are also configured to permit it. Statistics on delta-sync usage are available, including the total number of revisions sent as deltas.

        "},{"location":"peer-to-peer-sync/#conflict-resolution","title":"Conflict Resolution","text":"

        Conflict resolution for Peer-to-Peer synchronization works in the same way as it does for Sync Gateway replication, with both custom and automatic resolution available.

        "},{"location":"peer-to-peer-sync/#basic-setup","title":"Basic Setup","text":"

        You can configure Peer-to-Peer synchronization with just a small amount of code, as shown here in Example 2 and Example 3.

        Example 2. Simple Listener

        This simple listener configuration will give you a listener ready to participate in an encrypted synchronization with a replicator providing a valid username and password.

        val listener = URLEndpointListener(\n    URLEndpointListenerConfigurationFactory.newConfig(\n        collections = db.collections,\n        authenticator = ListenerPasswordAuthenticator { user, pwd ->\n            (user == \"daniel\") && (pwd.concatToString() == \"123\")\n        }\n    )\n)\nlistener.start()\nthis.listener = listener\n
        1. Initialize the listener configuration
        2. Configure the client authenticator to require basic authentication
        3. Initialize the listener
        4. Start the listener

        Example 3. Simple Replicator

        This simple replicator configuration will give you an encrypted, bi-directional Peer-to-Peer synchronization with automatic conflict resolution.

        val listenerEndpoint = URLEndpoint(\"wss://10.0.2.2:4984/db\") \nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        collections = mapOf(collections to null),\n        target = listenerEndpoint,\n        authenticator = BasicAuthenticator(\"valid.user\", \"valid.password.string\".toCharArray()),\n        acceptOnlySelfSignedServerCertificate = true\n    )\n)\nrepl.start() \nthis.replicator = repl\n
        1. Get the listener\u2019s endpoint. Here we use a known URL, but it could be a URL established dynamically in a discovery phase.
        2. Initialize the replicator configuration with the collections of the database to be synchronized and the listener it is to synchronize with.
        3. Configure the replicator to expect a self-signed certificate from the listener.
        4. Configure the replicator to present basic authentication credentials if the listener prompts for them (client authentication is optional).
        5. Initialize the replicator.
        6. Start the replicator.
        "},{"location":"peer-to-peer-sync/#api-highlights","title":"API Highlights","text":""},{"location":"peer-to-peer-sync/#urlendpointlistener","title":"URLEndpointListener","text":"

        The URLEndpointListener is the listener for peer-to-peer synchronization. It acts like a passive replicator, in the same way that Sync Gateway does in a 'standard' replication. On the client side, the listener\u2019s endpoint is used to point the replicator to the listener.

        Core functionalities of the listener are:

        • Users can initialize the class using a URLEndpointListenerConfiguration object.
        • The listener can be started, or can be stopped.
        • Once the listener is started, the total number of connections and the number of active connections can be checked.
        "},{"location":"peer-to-peer-sync/#urlendpointlistenerconfiguration","title":"URLEndpointListenerConfiguration","text":"

        Use URLEndpointListenerConfiguration to create a configuration object you can then use to initialize the listener.

        port

        This is the port that the listener will listen on.

        If the port is zero, the listener will auto-assign an available port to listen on.

        The default value is zero. When the listener is not started, the port is zero.

        networkInterface

        Use this to select a specific Network Interface to use, in the form of the IP Address or network interface name.

        If the network interface is specified, only that interface will be used.

        If the network interface is not specified, all available network interfaces will be used.

        The value is null if the listener is not started.

        isTlsDisabled

        You can use URLEndpointListenerConfiguration's isTlsDisabled property to disable TLS communication if necessary.

        The isTlsDisabled setting must be false when Client Cert Authentication is required.

        Basic Authentication can be used with, or without, TLS.

        isTlsDisabled works in conjunction with TLSIdentity, to enable developers to define the key and certificate to be used.

        • If isTlsDisabled is true \u2014 TLS communication is disabled and tlsIdentity is ignored. Active peers will use the ws:// URL scheme to connect to the listener.
        • If isTlsDisabled is false or not specified \u2014 TLS communication is enabled. Active peers will use the wss:// URL scheme to connect to the listener.

        tlsIdentity

        Use URLEndpointListenerConfiguration's tlsIdentity property to configure the TLS Identity used in TLS communication.

        If TLSIdentity is not set, then the listener uses an auto-generated anonymous self-signed identity (unless isTlsDisabled = true). Whilst the client cannot use this to authenticate the server, it will use it to encrypt communication, giving a more secure option than non-TLS communication.

        The auto-generated anonymous self-signed identity is saved in secure storage for future use to obviate the need to re-generate it.

        When the listener is not started, the identity is null. When TLS is disabled, the identity is always null.

        authenticator

        Use this to specify the authenticator the listener uses to authenticate the client\u2019s connection request. This should be set to one of the following:

        • ListenerPasswordAuthenticator
        • ListenerCertificateAuthenticator
        • null \u2014 there is no authentication

        isReadOnly

        Use this to allow only pull replication. The default value is false.

        isDeltaSyncEnabled

        The option to enable Delta Sync and replicate only changed data also depends on the delta sync setting at the database level. The default value is false.
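        The configuration properties above can be combined when initializing a listener. A minimal sketch, assuming a mutable URLEndpointListenerConfiguration constructed from the database's collections; the property names are as documented above, but the constructor signature and the "en0" interface name are illustrative assumptions to verify against your Kotbase version:

```kotlin
// Sketch: combining the URLEndpointListenerConfiguration properties described above.
val config = URLEndpointListenerConfiguration(db.collections).apply {
    port = 55990                      // default 0 auto-assigns an available port
    networkInterface = "en0"          // hypothetical interface name; null uses all interfaces
    isTlsDisabled = false             // keep TLS enabled; peers connect via wss://
    authenticator = ListenerPasswordAuthenticator { user, pwd ->
        (user == "daniel") && (pwd.concatToString() == "123")
    }
    isReadOnly = false                // allow both push and pull
    isDeltaSyncEnabled = true         // also requires delta sync enabled at the database level
}
val listener = URLEndpointListener(config)
listener.start()
```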

        "},{"location":"peer-to-peer-sync/#security_1","title":"Security","text":""},{"location":"peer-to-peer-sync/#authentication","title":"Authentication","text":"

        Peer-to-Peer sync supports Basic Authentication and TLS Authentication. For anything other than test deployments, we strongly encourage the use of TLS. In fact, Peer-to-Peer sync using URLEndpointListener is encrypted using TLS by default.

        The authentication mechanism is defined at the endpoint level, meaning that it is independent of the database being replicated. For example, you may use basic authentication on one instance and TLS authentication on another when replicating multiple database instances.

        Note

        The minimum supported version of TLS is TLS 1.2.

        Peer-to-Peer synchronization using URLEndpointListener supports certificate-based authentication of the server and/or listener:

        • Replicator certificates can be: self-signed, from trusted CA, or anonymous (system generated).
        • Listener certificates may be: self-signed or trusted CA signed. Where a TLS certificate is not explicitly specified for the listener, the listener implementation will generate an anonymous certificate to use for encryption.
        • The URLEndpointListener supports the ability to opt out of TLS encryption communication. Active clients replicating with a URLEndpointListener have the option to skip validation of server certificates when the listener is configured with self-signed certificates. This option is ignored when dealing with CA certificates.
        "},{"location":"peer-to-peer-sync/#using-secure-storage","title":"Using Secure Storage","text":"

        TLS and its associated keys and certificates might require using secure storage to minimize the chances of a security breach. The implementation of this storage differs from platform to platform. Table 1 summarizes the secure storage used to store keys and certificates for each platform.

        Table 1. Secure storage details

        Platform | Key & Certificate Storage | Notes | Reference

        Android | Android System KeyStore |
        • The Android KeyStore was introduced in Android API 18.
        • Android KeyStore security has evolved over time to provide more secure support. Please check this document for more info.
        | link

        macOS/iOS | KeyChain | Use kSecAttrLabel of the SecCertificate to store the TLSIdentity\u2019s label | link

        Java | User-specified KeyStore |
        • The KeyStore represents a storage facility for cryptographic keys and certificates. It is the user\u2019s choice whether to persist the KeyStore.
        • The supported KeyStore types are PKCS12 (default from Java 9) and JKS (default on Java 8 and below).
        |
        link"},{"location":"platforms/","title":"Supported Platforms","text":"

        Kotbase provides a common Kotlin Multiplatform API for Couchbase Lite, allowing you to develop a single Kotlin shared library, which compiles to native binaries that can be consumed by native apps on each of the supported platforms: Android, JVM, iOS, macOS, Linux, and Windows.

        "},{"location":"platforms/#android-jvm","title":"Android + JVM","text":"

        Kotbase implements support for JVM desktop and Android apps via the Couchbase Lite Java and Android SDKs. Kotbase's API mirrors the Java SDK as much as feasible, which allows for smooth migration for existing Kotlin code currently utilizing either the Java or Android KTX SDKs. See Differences from Couchbase Lite Java SDK for details about where the APIs differ.

        Kotbase will pull in the correct Couchbase Lite Java dependencies via Gradle.

        "},{"location":"platforms/#minification","title":"Minification","text":"

        An application that enables ProGuard minification must ensure that certain pieces of Couchbase Lite library code are not changed.

        Near-minimal rule set that retains the needed code proguard-rules.pro
        -keep class com.couchbase.lite.ConnectionStatus { <init>(...); }\n-keep class com.couchbase.lite.LiteCoreException { static <methods>; }\n-keep class com.couchbase.lite.internal.replicator.CBLTrustManager {\n    public java.util.List checkServerTrusted(java.security.cert.X509Certificate[], java.lang.String, java.lang.String);\n}\n-keep class com.couchbase.lite.internal.ReplicationCollection {\n    static <methods>;\n    <fields>;\n}\n-keep class com.couchbase.lite.internal.core.C4* {\n    static <methods>;\n    <fields>;\n    <init>(...);\n}\n
        "},{"location":"platforms/#android","title":"Android","text":"API x86 x64 ARM32 ARM64 22+"},{"location":"platforms/#jvm","title":"JVM","text":"JDK Linux x64 macOS x64 Windows x64 8+"},{"location":"platforms/#jvm-on-linux","title":"JVM on Linux","text":"

        Targeting JVM running on Linux requires a specific version of the libicu dependency. (You will see an error such as libLiteCore.so: libicuuc.so.71: cannot open shared object file: No such file or directory indicating the expected version.) If the required version isn't available from your distribution's package manager, you can download it from GitHub.

        "},{"location":"platforms/#ios-macos","title":"iOS + macOS","text":"

        Kotbase supports native iOS and macOS apps via the Couchbase Lite Objective-C SDK. Developers with experience using Couchbase Lite in Swift should find Kotbase's API in Kotlin familiar.

        Binaries need to link with the correct version of the CouchbaseLite XCFramework, which can be downloaded here or added via Carthage or CocoaPods. The version should match the major and minor version of Kotbase, e.g. CouchbaseLite 3.1.x for Kotbase 3.1.3-1.1.0.

        The Kotlin CocoaPods Gradle plugin can also be used to generate a Podspec for your project that includes the CouchbaseLite dependency. Use linkOnly = true to link the dependency without generating Kotlin Objective-C interop:

        CocoaPods plugin Enterprise EditionCommunity Edition build.gradle.kts
        plugins {\n    kotlin(\"multiplatform\")\n    kotlin(\"native.cocoapods\")\n}\n\nkotlin {\n    cocoapods {\n        ...\n        pod(\"CouchbaseLite-Enterprise\", version = \"3.1.4\", linkOnly = true)\n    }\n}\n
        build.gradle.kts
        plugins {\n    kotlin(\"multiplatform\")\n    kotlin(\"native.cocoapods\")\n}\n\nkotlin {\n    cocoapods {\n        ...\n        pod(\"CouchbaseLite\", version = \"3.1.4\", linkOnly = true)\n    }\n}\n
        "},{"location":"platforms/#ios","title":"iOS","text":"Version x64 ARM64 10+"},{"location":"platforms/#macos","title":"macOS","text":"Version x64 ARM64 10.14+"},{"location":"platforms/#linux-windows","title":"Linux + Windows","text":"

        Experimental support for Linux and Windows is provided via the Couchbase Lite C SDK. Core functionality should be mostly stable; however, these platforms have not been tested in production. Some tests exhibit slightly different behavior in a few edge cases, and others are failing and need further debugging. See comments in tests marked @IgnoreLinuxMingw for details.

        There are a few Enterprise Edition features that are not implemented in the Couchbase Lite C SDK. Kotbase will throw an UnsupportedOperationException if these APIs are called from these platforms.

        Binaries need to link with the correct version of the native platform libcblite binary, which can be downloaded here or here. The version should match the major and minor version of Kotbase, e.g. libcblite 3.1.x for Kotbase 3.1.3-1.1.0.

        "},{"location":"platforms/#linux","title":"Linux","text":"

        Linux also requires libz, libicu, and libpthread, which may or may not be installed on your system.

        Targeting Linux requires a specific version of the libicu dependency. (You will see an error such as libLiteCore.so: libicuuc.so.71: cannot open shared object file: No such file or directory indicating the expected version.) If the required version isn't available from your distribution's package manager, you can download it from GitHub.

        Distro Version x64 ARM64 Debian 9+ Raspberry Pi OS 10+ Ubuntu 20.04+"},{"location":"platforms/#windows","title":"Windows","text":"Version x64 10+"},{"location":"prebuilt-database/","title":"Pre-built Database","text":"

        How to include a snapshot of a pre-built database in your Couchbase Lite app package to shorten initial sync time and reduce bandwidth use

        "},{"location":"prebuilt-database/#overview","title":"Overview","text":"

        Couchbase Lite supports pre-built databases. You can pre-load your app with data instead of syncing it from Sync Gateway during startup to minimize consumer wait time (arising from data setup) on initial install and launch of the application.

        Avoiding an initial bulk sync reduces startup time and network transfer costs.

        It is typically more efficient to download bulk data over the HTTP/FTP stream used during application installation than to install a smaller application bundle and then use a replicator to pull in the bulk data.

        Pre-loaded data is typically public/shared, non-user-specific data that is static. Even if the data is not static, you can still benefit from preloading it and only syncing the changed documents on startup.

        The initial sync of any pre-built database pulls in any content changes on the server that occurred after its incorporation into the app, updating the database.

        To use a prebuilt database:

        1. Create a new Couchbase Lite database with the required dataset \u2014 see Creating Pre-built Database
        2. Incorporate the pre-built database with your app bundle as an asset/resource \u2014 see Bundle a Database with an Application
        3. Adjust the start-up logic of your app to check for the presence of the required database. If the database doesn\u2019t already exist, create one using the bundled pre-built database. Initiate a sync to update the data \u2014 see Using Pre-built Database on App Launch
        "},{"location":"prebuilt-database/#creating-pre-built-database","title":"Creating Pre-built Database","text":"

        These steps should form part of your build and release process:

        1. Create a fresh Couchbase Lite database (every time)

          Important

          Always start with a fresh database for each app version; this ensures there are no checkpoint issues

          Otherwise: If you reuse the same database in your build process for subsequent app versions, you will invalidate the cached checkpoint in the packaged database.

        2. Pull the data from Sync Gateway into the new Couchbase Lite database

          Important

          Ensure the replication used to populate the Couchbase Lite database uses the exact same remote URL and replication configuration parameters (channels and filters) as those your app will use when it is running.

          Otherwise: \u2026 there will be a checkpoint mismatch and the app will attempt to pull the data down again

          Don\u2019t, for instance, create a pre-built database against a staging Sync Gateway server and use it within a production app that syncs against a production Sync Gateway.

          You can use the cblite tool (cblite cp) for this \u2014 see cblite cp (export, import, push, pull) | cblite on GitHub

          Alternatively \u2026

          • You can write a simple CBL app to just initiate the required pull sync \u2014 see Remote Sync Gateway
          • A third party community Java app is available. It provides a UI to create a local Couchbase Lite database and pull data from a Sync Gateway database \u2014 see CouchbaseLite Tester.
        3. Create the same indexes the app will use (wait for the replication to finish before doing this).

        "},{"location":"prebuilt-database/#bundle-a-database-with-an-application","title":"Bundle a Database with an Application","text":"

        Copy the database into your app package.

        Put it in an appropriate place (for example, an assets or resource folder).

        Where the platform permits you can zip the database.

        Alternatively \u2026 rather than bundling the database within the app, the app could pull the database down from a CDN server on launch.

        "},{"location":"prebuilt-database/#database-encryption","title":"Database Encryption","text":"

        This is an Enterprise Edition feature.

        If you are using an encrypted database, Database.copy() does not change the encryption key. The encryption key specified in the config when opening the database is the encryption key used for both the original database and copied database.

        If you copied an un-encrypted database and want to apply encryption to the copy, or if you want to change (or remove) the encryption key applied to the copy:

        1. Provide the original encryption-key (if any) in the database copy\u2019s configuration using DatabaseConfiguration.setEncryptionKey().
        2. Open the database copy.
        3. Use Database.changeEncryptionKey() on the database copy to set the required encryption key. NOTE: To remove encryption on the copy, provide a null encryption-key.
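        The three steps above can be sketched as follows. This is a hedged example: the key values are illustrative, and the EncryptionKey string constructor should be verified against your platform's API:

```kotlin
// 1. Provide the original encryption key (if any) in the copy's configuration.
val config = DatabaseConfiguration()
config.setEncryptionKey(EncryptionKey("old-password"))  // omit if the copy was un-encrypted

// 2. Open the database copy.
val db = Database("travel-sample", config)

// 3. Set the required encryption key on the copy.
db.changeEncryptionKey(EncryptionKey("new-password"))   // pass null to remove encryption
```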
        "},{"location":"prebuilt-database/#using-pre-built-database-on-app-launch","title":"Using Pre-built Database on App Launch","text":"

        During the application start-up logic, check if the database exists in the required location, and if not:

        1. Locate the pre-packaged database (for example, in the assets or other resource folder).
        2. Copy the pre-packaged database to the required location.

          Use the API\u2019s Database.copy() method \u2014 see: Example 1; this ensures that a UUID is generated for each copy.

          Important

          Do not copy the database using any other method

          Otherwise: Each copy of the app will invalidate the other apps' checkpoints because a new UUID was not generated.

        3. Open the database; you can now start querying the data and using it.

        4. Start a pull replication, to sync any changes.

          The replicator uses the pre-built database\u2019s checkpoint as the timestamp to sync from; only documents changed since then are synced.

          Important

          If you used cblite to pull the data without including a port number with the URL and are replicating in a Java or iOS (swift/ObjC) app \u2014 you must include the port number in the URL provided to the replication (port 443 for wss:// or 80 for ws://).

          Otherwise: You will get a checkpoint mismatch. This is caused by a URL discrepancy, which arises because cblite automatically adds the default port number when none is specified, but the Java and iOS (swift/ObjC) replicators DO NOT.

          Note

          Start your normal application logic immediately, unless it is essential to have the fully up-to-date data set to begin with. That way the user is not kept waiting, watching a progress indicator. They can begin interacting with your app whilst any out-of-date data is being updated.

        Example 1. Copy database using API

        Note

        Getting the path to a database and package resources is platform-specific.

        You may need to extract the database from your package resources to a temporary directory and then copy it, using Database.copy().

        if (Database.exists(\"travel-sample\")) {\n    return\n}\nval pathToPrebuiltDb = getPrebuiltDbPathFromResources()\nDatabase.copy(\n    pathToPrebuiltDb,\n    \"travel-sample\",\n    DatabaseConfiguration()\n)\n
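        To complete step 4 above, a pull-only replicator can be started against the remote, reusing the replicator configuration style shown in Peer-to-Peer Sync. This is a sketch under assumptions: the endpoint URL is illustrative (note the explicit port number, per the Important note above), and the factory parameter names should be verified against your Kotbase version:

```kotlin
// Pull-only replication to sync changes made since the pre-built database's checkpoint.
val repl = Replicator(
    ReplicatorConfigurationFactory.newConfig(
        collections = mapOf(db.collections to null),
        target = URLEndpoint("wss://sync.example.com:443/travel-sample"),
        type = ReplicatorType.PULL
    )
)
repl.start()
this.replicator = repl  // keep a reference so the replicator isn't garbage collected
```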
        "},{"location":"query-builder/","title":"QueryBuilder","text":"

        How to use QueryBuilder to build effective queries with Kotbase

        Note

        The examples used here are based on the Travel Sample app and data introduced in the Couchbase Mobile Workshop tutorial.

        "},{"location":"query-builder/#introduction","title":"Introduction","text":"

        Kotbase provides two ways to build and run database queries; the QueryBuilder API described in this topic and SQL++ for Mobile.

        Database queries defined with the QueryBuilder API use the query statement format shown in Example 1. The structure and semantics of the query format are based on Couchbase\u2019s SQL++ query language.

        Example 1. Query Format

        SELECT ____\nFROM 'data-source'\nWHERE ____,\nJOIN ____\nGROUP BY ____\nORDER BY ____\n

        Query Components

        Component | Description
        SELECT statement | The document properties that will be returned in the result set
        FROM | The data source to query the documents from \u2014 the collection of the database
        WHERE statement | The query criteria. The SELECTed properties of documents matching this criteria will be returned in the result set
        JOIN statement | The criteria for joining multiple documents
        GROUP BY statement | The criteria used to group returned items in the result set
        ORDER BY statement | The criteria used to order the items in the result set

        Tip

        We recommend working through the query section of the Couchbase Mobile Workshop tutorial as a good way to build your skills in this area.

        Tip

        The examples in the documentation use the official Couchbase Lite query builder APIs, available in the Kotbase core artifacts. Many queries can take advantage of the concise infix function query builder APIs available in the Kotbase KTX extensions.

        "},{"location":"query-builder/#select-statement","title":"SELECT statement","text":"

        In this section Return All Properties | Return Selected Properties

        Related Result Sets

        Use the SELECT statement to specify which properties you want to return from the queried documents. You can opt to retrieve entire documents, or just the specific properties you need.

        "},{"location":"query-builder/#return-all-properties","title":"Return All Properties","text":"

        Use the SelectResult.all() method to return all the properties of selected documents \u2014 see Example 2.

        Example 2. Using SELECT to Retrieve All Properties

        This query shows how to retrieve all properties from all documents in a collection.

        val queryAll = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(Expression.property(\"type\").equalTo(Expression.string(\"hotel\")))\n

        The Query.execute() statement returns the results in a dictionary, where the key is the database name \u2014 see Example 3.

        Example 3. ResultSet Format from SelectResult.all()

        [\n  {\n    \"travel-sample\": { // The result for the first document matching the query criteria.\n      \"callsign\": \"MILE-AIR\",\n      \"country\": \"United States\",\n      \"iata\": \"Q5\",\n      \"icao\": \"MLA\",\n      \"id\": 10,\n      \"name\": \"40-Mile Air\",\n      \"type\": \"airline\"\n    }\n  },\n  {\n    \"travel-sample\": { // The result for the next document matching the query criteria.\n      \"callsign\": \"ALASKAN-AIR\",\n      \"country\": \"United States\",\n      \"iata\": \"AA\",\n      \"icao\": \"AAA\",\n      \"id\": 10,\n      \"name\": \"Alaskan Airways\",\n      \"type\": \"airline\"\n    }\n  }\n]\n

        See Result Sets for more on processing query results.

        "},{"location":"query-builder/#return-selected-properties","title":"Return Selected Properties","text":"

        To access only specific properties, specify a comma-separated list of SelectResult expressions, one for each property, in the select statement of your query \u2014 see Example 4.

        Example 4. Using SELECT to Retrieve Specific Properties

        In this query we retrieve and then print the _id, type, and name properties of each document.

        val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"name\"),\n        SelectResult.property(\"type\")\n    )\n    .from(DataSource.collection(collection))\n    .where(Expression.property(\"type\").equalTo(Expression.string(\"hotel\")))\n    .orderBy(Ordering.expression(Meta.id))\n\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"hotel id -> ${it.getString(\"id\")}\")\n        println(\"hotel name -> ${it.getString(\"name\")}\")\n    }\n}\n

        The Query.execute() statement returns one or more key-value pairs, one for each SelectResult expression, with the property-name as the key \u2014 see Example 5.

        Example 5. Select Result Format

        [\n  { // The result for the first document matching the query criteria.\n    \"id\": \"hotel123\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Ghia\"\n  },\n  { // The result for the next document matching the query criteria.\n    \"id\": \"hotel456\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Deluxe\"\n  }\n]\n

        See Result Sets for more on processing query results.

        "},{"location":"query-builder/#where-statement","title":"WHERE statement","text":"

        In this section Comparison Operators | Collection Operators | Like Operator | Regex Operator | Deleted Document

        Like SQL, you can use the WHERE statement to choose which documents are returned by your query. The where() statement takes in an Expression. You can chain any number of Expressions in order to implement sophisticated filtering capabilities.

        "},{"location":"query-builder/#comparison-operators","title":"Comparison Operators","text":"

        The Expression Comparators can be used in the WHERE statement to specify on which property to match documents. In the example below, we use the equalTo operator to query documents where the type property equals \"hotel\".

        [\n  { \n    \"id\": \"hotel123\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Ghia\"\n  },\n  { \n    \"id\": \"hotel456\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Deluxe\"\n  }\n]\n

        Example 6. Using Where

        val query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(Expression.property(\"type\").equalTo(Expression.string(\"hotel\")))\n    .limit(Expression.intValue(10))\n\nquery.execute().use { rs ->\n    rs.forEach { result ->\n        result.getDictionary(\"myDatabase\")?.let {\n            println(\"name -> ${it.getString(\"name\")}\")\n            println(\"type -> ${it.getString(\"type\")}\")\n        }\n    }\n}\n
        "},{"location":"query-builder/#collection-operators","title":"Collection Operators","text":"

        ArrayFunction collection operators are useful for checking whether a given value is present in an array.

        "},{"location":"query-builder/#contains-operator","title":"CONTAINS Operator","text":"

        The following example uses the ArrayFunction to find documents where the public_likes array property contains a value equal to \"Armani Langworth\".

        {\n    \"_id\": \"hotel123\",\n    \"name\": \"Apple Droid\",\n    \"public_likes\": [\"Armani Langworth\", \"Elfrieda Gutkowski\", \"Maureen Ruecker\"]\n}\n

        Example 7. Using the CONTAINS operator

        val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"name\"),\n        SelectResult.property(\"public_likes\")\n    )\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").equalTo(Expression.string(\"hotel\"))\n            .and(\n                ArrayFunction.contains(\n                    Expression.property(\"public_likes\"),\n                    Expression.string(\"Armani Langworth\")\n                )\n            )\n    )\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"public_likes -> ${it.getArray(\"public_likes\")?.toList()}\")\n    }\n}\n
        "},{"location":"query-builder/#in-operator","title":"IN Operator","text":"

        The IN operator is useful when you need to explicitly list out the values to test against. The following example looks for documents whose first, last, or username property value equals \"Armani\".

        Example 8. Using the IN operator

        val query = QueryBuilder.select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.string(\"Armani\").`in`(\n            Expression.property(\"first\"),\n            Expression.property(\"last\"),\n            Expression.property(\"username\")\n        )\n    )\n\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"match -> ${it.toMap()}\")\n    }\n}\n
        "},{"location":"query-builder/#like-operator","title":"Like Operator","text":"

        In this section String Matching | Wildcard Match | Wildcard Character Match

        "},{"location":"query-builder/#string-matching","title":"String Matching","text":"

        The like() operator can be used for string matching \u2014 see Example 9.

        Note

        The like operator performs case-sensitive matching. To perform case-insensitive matching, use Function.lower or Function.upper to ensure all comparators have the same case, thereby removing the case issue.

        This query returns landmark type documents where the name matches the string \"Royal Engineers Museum\", regardless of how it is capitalized (so, it selects \"royal engineers museum\", \"ROYAL ENGINEERS MUSEUM\" and so on).

        Example 9. Like with case-insensitive matching

        val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"country\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").equalTo(Expression.string(\"landmark\"))\n            .and(\n                Function.lower(Expression.property(\"name\"))\n                    .like(Expression.string(\"royal engineers museum\"))\n            )\n    )\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"name -> ${it.getString(\"name\")}\")\n    }\n}\n

        Note the use of Function.lower() to transform name values to the same case as the literal comparator.

        "},{"location":"query-builder/#wildcard-match","title":"Wildcard Match","text":"

        We can use the % sign within a like expression to do a wildcard match against zero or more characters. Using wildcards allows you to have some fuzziness in your search string.

        In Example 10 below, we are looking for documents of type \"landmark\" where the name property matches any string that begins with \"eng\" followed by zero or more characters, the letter \"e\", followed by zero or more characters. Once again, we are using Function.lower() to make the search case-insensitive.

        So the query returns \"landmark\" documents with names such as \"Engineers\", \"engine\", \"english egg\" and \"England Eagle\". Notice that the matches may span word boundaries.

        Example 10. Wildcard Matches

        val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"country\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").equalTo(Expression.string(\"landmark\"))\n            .and(\n                Function.lower(Expression.property(\"name\"))\n                    .like(Expression.string(\"eng%e%\"))\n            )\n    )\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"name -> ${it.getString(\"name\")}\")\n    }\n}\n
        "},{"location":"query-builder/#wildcard-character-match","title":"Wildcard Character Match","text":"

We can use the _ sign within a like expression to do a wildcard match against exactly one character.

        In Example 11 below, we are looking for documents of type \"landmark\" where the name property matches any string that begins with \"eng\" followed by exactly 4 wildcard characters and ending in the letter \"r\". The query returns \"landmark\" type documents with names such as \"Engineer\", \"engineer\" and so on.

        Example 11. Wildcard Character Matching

        val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"country\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").equalTo(Expression.string(\"landmark\"))\n            .and(\n                Function.lower(Expression.property(\"name\"))\n                    .like(Expression.string(\"eng____r\"))\n            )\n    )\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"name -> ${it.getString(\"name\")}\")\n    }\n}\n
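To see why these patterns match the names above, the % and _ semantics can be sketched outside the database with an equivalent regular expression. The likeToRegex helper below is hypothetical, for illustration only (it is not part of the Couchbase Lite or Kotbase API); it mirrors how like compares patterns against the lowercased name values:

```kotlin
// Hypothetical helper: translate a LIKE pattern into an equivalent Regex.
// '%' matches zero or more characters; '_' matches exactly one character.
fun likeToRegex(pattern: String): Regex =
    Regex(pattern.map { ch ->
        when (ch) {
            '%' -> ".*"
            '_' -> "."
            else -> Regex.escape(ch.toString())
        }
    }.joinToString(""))

fun likeMatches(pattern: String, value: String): Boolean =
    likeToRegex(pattern).matches(value.lowercase())

fun main() {
    println(likeMatches("eng%e%", "England Eagle")) // true: the match spans a word boundary
    println(likeMatches("eng____r", "Engineer"))    // true: exactly four wildcard characters
    println(likeMatches("eng____r", "engine"))      // false: too short for the pattern
}
```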
        "},{"location":"query-builder/#regex-operator","title":"Regex Operator","text":"

Similar to the wildcards in like expressions, regex-based pattern matching allows you to introduce an element of fuzziness in your search string \u2014 see the code shown in Example 12.

        Note

The regex operator is case-sensitive; use the upper or lower functions to mitigate this if required.

        Example 12. Using Regular Expressions

This example returns documents with a type of \"landmark\" and a name property that matches any string that begins with \"eng\" and ends in the letter \"r\".

        val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"country\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").equalTo(Expression.string(\"landmark\"))\n            .and(\n                Function.lower(Expression.property(\"name\"))\n                    .regex(Expression.string(\"\\\\beng.*r\\\\b\"))\n            )\n    )\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"name -> ${it.getString(\"name\")}\")\n    }\n}\n

        The \\b specifies that the match must occur on word boundaries.

        Tip

For more on the regex spec used by Couchbase Lite see the cplusplus regex reference page.

        "},{"location":"query-builder/#deleted-document","title":"Deleted Document","text":"

        You can query documents that have been deleted (tombstones) as shown in Example 13.

        Example 13. Query to select Deleted Documents

This example shows how to query deleted documents in the database. It returns an array of key-value pairs.

        // Query documents that have been deleted\nval query = QueryBuilder\n    .select(SelectResult.expression(Meta.id))\n    .from(DataSource.collection(collection))\n    .where(Meta.deleted)\n
        "},{"location":"query-builder/#join-statement","title":"JOIN statement","text":"

The JOIN clause enables you to select data from multiple documents that have been linked by criteria specified in the JOIN statement. For example, to combine airline details with route details, linked by the airline ID \u2014 see Example 14.

        Example 14. Using JOIN to Combine Document Details

This example joins documents of type \"route\" with documents of type \"airline\", matching the document ID (_id) on the airline document to the airlineid on the route document.

        val query = QueryBuilder\n    .select(\n        SelectResult.expression(Expression.property(\"name\").from(\"airline\")),\n        SelectResult.expression(Expression.property(\"callsign\").from(\"airline\")),\n        SelectResult.expression(Expression.property(\"destinationairport\").from(\"route\")),\n        SelectResult.expression(Expression.property(\"stops\").from(\"route\")),\n        SelectResult.expression(Expression.property(\"airline\").from(\"route\"))\n    )\n    .from(DataSource.collection(airlineCollection).`as`(\"airline\"))\n    .join(\n        Join.join(DataSource.collection(routeCollection).`as`(\"route\"))\n            .on(\n                Meta.id.from(\"airline\")\n                    .equalTo(Expression.property(\"airlineid\").from(\"route\"))\n            )\n    )\n    .where(\n        Expression.property(\"type\").from(\"route\").equalTo(Expression.string(\"route\"))\n            .and(\n                Expression.property(\"type\").from(\"airline\")\n                    .equalTo(Expression.string(\"airline\"))\n            )\n            .and(\n                Expression.property(\"sourceairport\").from(\"route\")\n                    .equalTo(Expression.string(\"RIX\"))\n            )\n    )\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"name -> ${it.toMap()}\")\n    }\n}\n
        "},{"location":"query-builder/#group-by-statement","title":"GROUP BY statement","text":"

        You can perform further processing on the data in your result set before the final projection is generated.

        The following example looks for the number of airports at an altitude of 300 ft or higher and groups the results by country and timezone.

        Data Model for Example
        {\n    \"_id\": \"airport123\",\n    \"type\": \"airport\",\n    \"country\": \"United States\",\n    \"geo\": { \"alt\": 456 },\n    \"tz\": \"America/Anchorage\"\n}\n

        Example 15. Query using GroupBy

This example shows a query that selects all airports at an altitude of 300 ft or higher. The output (a count, $1) is grouped by country, within timezone.

        val query = QueryBuilder\n    .select(\n        SelectResult.expression(Function.count(Expression.string(\"*\"))),\n        SelectResult.property(\"country\"),\n        SelectResult.property(\"tz\")\n    )\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").equalTo(Expression.string(\"airport\"))\n            .and(Expression.property(\"geo.alt\").greaterThanOrEqualTo(Expression.intValue(300)))\n    )\n    .groupBy(\n        Expression.property(\"country\"), Expression.property(\"tz\")\n    )\n    .orderBy(Ordering.expression(Function.count(Expression.string(\"*\"))).descending())\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\n            \"There are ${it.getInt(\"$1\")} airports on the ${\n                it.getString(\"tz\")\n            } timezone located in ${\n                it.getString(\"country\")\n            } and above 300ft\"\n        )\n    }\n}\n

        The query shown in Example 15 generates the following output:

There are 138 airports on the Europe/Paris timezone located in France and above 300 ft
There are 29 airports on the Europe/London timezone located in United Kingdom and above 300 ft
There are 50 airports on the America/Anchorage timezone located in United States and above 300 ft
There are 279 airports on the America/Chicago timezone located in United States and above 300 ft
There are 123 airports on the America/Denver timezone located in United States and above 300 ft

        "},{"location":"query-builder/#order-by-statement","title":"ORDER BY statement","text":"

        It is possible to sort the results of a query based on a given expression result \u2014 see Example 16.

        Example 16. Query using OrderBy

This example shows a query that returns documents of type equal to \"hotel\" sorted in ascending order by the value of the name property.

        val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n    .where(Expression.property(\"type\").equalTo(Expression.string(\"hotel\")))\n    .orderBy(Ordering.property(\"name\").ascending())\n    .limit(Expression.intValue(10))\n\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"${it.toMap()}\")\n    }\n}\n

        The query shown in Example 16 generates the following output:

Aberdyfi
Achiltibuie
Altrincham
Ambleside
Annan
Ard\u00e8che
Armagh
Avignon

        "},{"location":"query-builder/#datetime-functions","title":"Date/Time Functions","text":"

        Couchbase Lite documents support a date type that internally stores dates in ISO 8601 with the GMT/UTC timezone.

        Couchbase Lite\u2019s Query Builder API includes four functions for date comparisons.

Function.stringToMillis(Expression.property(\"date_time\")) The input to this will be a validly formatted ISO 8601 date_time string. The end result will be an expression (with numeric content) that can be further input into the query builder.

        Function.stringToUTC(Expression.property(\"date_time\")) The input to this will be a validly formatted ISO 8601 date_time string. The end result will be an expression (with string content) that can be further input into the query builder.

        Function.millisToString(Expression.property(\"date_time\")) The input for this is a numeric value representing milliseconds since the Unix epoch. The end result will be an expression (with string content representing the date and time as an ISO 8601 string in the device\u2019s timezone) that can be further input into the query builder.

        Function.millisToUTC(Expression.property(\"date_time\")) The input for this is a numeric value representing milliseconds since the Unix epoch. The end result will be an expression (with string content representing the date and time as a UTC ISO 8601 string) that can be further input into the query builder.
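For reference, the underlying conversions (ISO 8601 string to milliseconds since the Unix epoch, and back) can be sketched with java.time on the JVM. This is not the Couchbase Lite implementation, just the same arithmetic these query functions perform:

```kotlin
import java.time.Instant
import java.time.format.DateTimeFormatter

fun main() {
    // stringToMillis / stringToUTC take a validly formatted ISO 8601 string...
    val millis = Instant.parse("2024-02-01T12:00:00Z").toEpochMilli()
    println(millis) // 1706788800000

    // ...millisToUTC performs the reverse, producing a UTC ISO 8601 string
    val utc = DateTimeFormatter.ISO_INSTANT.format(Instant.ofEpochMilli(millis))
    println(utc) // 2024-02-01T12:00:00Z
}
```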

        "},{"location":"query-builder/#result-sets","title":"Result Sets","text":"

        In this section Processing | Select All Properties | Select Specific Properties | Select Document ID Only | Select Count-only | Handling Pagination

        "},{"location":"query-builder/#processing","title":"Processing","text":"

        This section shows how to handle the returned result sets for different types of SELECT statements.

        The result set format and its handling varies slightly depending on the type of SelectResult statements used. The result set formats you may encounter include those generated by:

        • SelectResult.all() \u2014 see All Properties
        • SelectResult.property(\"name\") \u2014 see Specific Properties
        • SelectResult.expression(Meta.id) \u2014 Metadata (such as the _id) \u2014 see Document ID Only
        • SelectResult.expression(Function.count(Expression.all())).as(\"mycount\") \u2014 see Select Count-only

        To process the results of a query, you first need to execute it using Query.execute().

        The execution of a Kotbase database query typically returns an array of results, a result set.

        • The result set of an aggregate, count-only, query is a key-value pair \u2014 see Select Count-only \u2014 which you can access using the count name as its key.
        • The result set of a query returning document properties is an array. Each array row represents the data from a document that matched your search criteria (the WHERE statements). The composition of each row is determined by the combination of SelectResult expressions provided in the SELECT statement. To unpack these result sets you need to iterate this array.
        "},{"location":"query-builder/#select-all-properties","title":"Select All Properties","text":""},{"location":"query-builder/#query","title":"Query","text":"

The Select statement for this type of query returns all document properties for each document matching the query criteria \u2014 see Example 17.

        Example 17. Query selecting All Properties

        val query = QueryBuilder.select(SelectResult.all())\n    .from(DataSource.collection(collection))\n
        "},{"location":"query-builder/#result-set-format","title":"Result Set Format","text":"

        The result set returned by queries using SelectResult.all() is an array of dictionary objects \u2014 one for each document matching the query criteria.

        For each result object, the key is the database name and the value is a dictionary representing each document property as a key-value pair \u2014 see Example 18.

        Example 18. Format of Result Set (All Properties)

        [\n  {\n    \"travel-sample\": { // The result for the first document matching the query criteria.\n      \"callsign\": \"MILE-AIR\",\n      \"country\": \"United States\",\n      \"iata\": \"Q5\",\n      \"icao\": \"MLA\",\n      \"id\": 10,\n      \"name\": \"40-Mile Air\",\n      \"type\": \"airline\"\n    }\n  },\n  {\n    \"travel-sample\": { // The result for the next document matching the query criteria.\n      \"callsign\": \"ALASKAN-AIR\",\n      \"country\": \"United States\",\n      \"iata\": \"AA\",\n      \"icao\": \"AAA\",\n      \"id\": 10,\n      \"name\": \"Alaskan Airways\",\n      \"type\": \"airline\"\n    }\n  }\n]\n
        "},{"location":"query-builder/#result-set-access","title":"Result Set Access","text":"

In this case, access the retrieved document properties by converting each row\u2019s value, in turn, to a dictionary \u2014 as shown in Example 19.

        Example 19. Using Document Properties (All)

        val hotels = mutableMapOf<String, Hotel>()\nquery.execute().use { rs ->\n    rs.allResults().forEach {\n        // get the k-v pairs from the 'hotel' key's value into a dictionary\n        val docProps = it.getDictionary(0) \n        val docId = docProps!!.getString(\"id\")\n        val docName = docProps.getString(\"name\")\n        val docType = docProps.getString(\"type\")\n        val docCity = docProps.getString(\"city\")\n\n        // Alternatively, access results value dictionary directly\n        val id = it.getDictionary(0)?.getString(\"id\")!!\n        hotels[id] = Hotel(\n            id,\n            it.getDictionary(0)?.getString(\"type\"),\n            it.getDictionary(0)?.getString(\"name\"),\n            it.getDictionary(0)?.getString(\"city\"),\n            it.getDictionary(0)?.getString(\"country\"),\n            it.getDictionary(0)?.getString(\"description\")\n        )\n    }\n}\n
        "},{"location":"query-builder/#select-specific-properties","title":"Select Specific Properties","text":""},{"location":"query-builder/#query_1","title":"Query","text":"

        Here we use SelectResult.property(\"<property-name>\") to specify the document properties we want our query to return \u2014 see Example 20.

        Example 20. Query selecting Specific Properties

        val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"country\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n
        "},{"location":"query-builder/#result-set-format_1","title":"Result Set Format","text":"

        The result set returned when selecting only specific document properties is an array of dictionary objects \u2014 one for each document matching the query criteria.

        Each result object comprises a key-value pair for each selected document property \u2014 see Example 21.

        Example 21. Format of Result Set (Specific Properties)

        [\n  { // The result for the first document matching the query criteria.\n    \"id\": \"hotel123\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Ghia\"\n  },\n  { // The result for the next document matching the query criteria.\n    \"id\": \"hotel456\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Deluxe\",\n  }\n]\n
        "},{"location":"query-builder/#result-set-access_1","title":"Result Set Access","text":"

        Access the retrieved properties by converting each row into a dictionary \u2014 as shown in Example 22.

        Example 22. Using Returned Document Properties (Specific Properties)

        query.execute().use { rs ->\n    rs.allResults().forEach {\n        println(\"Hotel name -> ${it.getString(\"name\")}, in ${it.getString(\"country\")}\")\n    }\n}\n
        "},{"location":"query-builder/#select-document-id-only","title":"Select Document ID Only","text":""},{"location":"query-builder/#query_2","title":"Query","text":"

You would typically use this type of query if retrieving the document properties directly would consume excessive amounts of memory and/or processing time \u2014 see Example 23.

        Example 23. Query selecting only Doc ID

        val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id).`as`(\"hotelId\")\n    )\n    .from(DataSource.collection(collection))\n
        "},{"location":"query-builder/#result-set-format_2","title":"Result Set Format","text":"

        The result set returned by queries using a SelectResult expression of the form SelectResult.expression(Meta.id) is an array of dictionary objects \u2014 one for each document matching the query criteria. Each result object has id as the key and the ID value as its value \u2014 see Example 24.

        Example 24. Format of Result Set (Doc ID only)

        [\n  {\n    \"id\": \"hotel123\"\n  },\n  {\n    \"id\": \"hotel456\"\n  }\n]\n
        "},{"location":"query-builder/#result-set-access_2","title":"Result Set Access","text":"

        In this case, access the required document\u2019s properties by unpacking the id and using it to get the document from the database \u2014 see Example 25.

        Example 25. Using Returned Document Properties (Document ID)

        query.execute().use { rs ->\n    rs.allResults().forEach {\n        // Extract the ID value from the dictionary\n        it.getString(\"hotelId\")?.let { hotelId ->\n            println(\"hotel id -> $hotelId\")\n            // use the ID to get the document from the database\n            val doc = collection.getDocument(hotelId)\n        }\n    }\n}\n
        "},{"location":"query-builder/#select-count-only","title":"Select Count-only","text":""},{"location":"query-builder/#query_3","title":"Query","text":"

        Example 26. Query selecting a Count-only

        val query = QueryBuilder\n    .select(\n        SelectResult.expression(Function.count(Expression.string(\"*\"))).`as`(\"mycount\")\n    ) \n    .from(DataSource.collection(collection))\n

        The alias name, mycount, is used to access the count value.

        "},{"location":"query-builder/#result-set-format_3","title":"Result Set Format","text":"

The result set returned by a count such as SelectResult.expression(Function.count(Expression.all())) is a key-value pair. The key is the count name, as defined using SelectResult.as() \u2014 see Example 27 for the format and Example 26 for the query.

        Example 27. Format of Result Set (Count)

        {\n  \"mycount\": 6\n}\n

        The key-value pair returned by a count.

        "},{"location":"query-builder/#result-set-access_3","title":"Result Set Access","text":"

        Access the count using its alias name (mycount in this example) \u2014 see Example 28.

        Example 28. Using Returned Document Properties (Count)

query.execute().use { rs ->\n    rs.allResults().forEach {\n        println(\"count -> ${it.getInt(\"mycount\")}\")\n    }\n}\n

        Get the count using the SelectResult.as() alias, which is used as its key.

        "},{"location":"query-builder/#handling-pagination","title":"Handling Pagination","text":"

One way to handle pagination in high-volume queries is to retrieve the results in batches. Use the limit and offset feature to return a defined number of results starting from a given offset \u2014 see Example 29.

        Example 29. Query Pagination

        val thisOffset = 0\nval thisLimit = 20\nval query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .limit(\n        Expression.intValue(thisLimit),\n        Expression.intValue(thisOffset)\n    ) \n

        Return a maximum of limit results starting from result number offset.
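The batching arithmetic is straightforward: page n starts at offset n * limit. A minimal sketch in plain Kotlin, with a list standing in for the result of repeated query executions:

```kotlin
fun main() {
    val results = (1..45).map { "doc$it" } // stand-in for the full result set
    val limit = 20

    var offset = 0
    while (offset < results.size) {
        // Each iteration corresponds to one query built with
        // .limit(Expression.intValue(limit), Expression.intValue(offset))
        val page = results.drop(offset).take(limit)
        println("offset $offset -> ${page.size} results")
        offset += limit
    }
}
```

The last page is simply shorter than limit; iteration stops once the offset passes the total result count.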

        Tip

        The Kotbase paging extensions provide a PagingSource to use with AndroidX Paging to assist loading and displaying pages of data in your app.

        Tip

        For more on using the QueryBuilder API, see our blog: Introducing the Query Interface in Couchbase Mobile

        "},{"location":"query-builder/#json-result-sets","title":"JSON Result Sets","text":"

        Kotbase provides a convenience API to convert query results to JSON strings.

        Use Result.toJSON() to transform your result into a JSON string, which can easily be serialized or used as required in your application. See Example 30 for a working example using kotlinx-serialization.

        Example 30. Using JSON Results

        // Uses kotlinx-serialization JSON processor\n@Serializable\ndata class Hotel(val id: String, val type: String, val name: String)\n\nval hotels = mutableListOf<Hotel>()\n\nval query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"type\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n\nquery.execute().use { rs ->\n    rs.forEach {\n\n        // Get result as JSON string\n        val json = it.toJSON()\n\n        // Get JsonObject map from JSON string\n        val mapFromJsonString = Json.decodeFromString<JsonObject>(json)\n\n        // Use created JsonObject map\n        val hotelId = mapFromJsonString[\"id\"].toString()\n        val hotelType = mapFromJsonString[\"type\"].toString()\n        val hotelName = mapFromJsonString[\"name\"].toString()\n\n        // Get custom object from JSON string\n        val hotel = Json.decodeFromString<Hotel>(json)\n        hotels.add(hotel)\n    }\n}\n
        "},{"location":"query-builder/#json-string-format","title":"JSON String Format","text":"

        If your query selects ALL then the JSON format will be:

        {\n  database-name: {\n    key1: \"value1\",\n    keyx: \"valuex\"\n  }\n}\n

If your query selects a subset of the available properties then the JSON format will be:

        {\n  key1: \"value1\",\n  keyx: \"valuex\"\n}\n
        "},{"location":"query-builder/#predictive-query","title":"Predictive Query","text":"

        This is an Enterprise Edition feature.

        Predictive Query enables Couchbase Lite queries to use machine learning, by providing query functions that can process document data (properties or blobs) via trained ML models.

        Let\u2019s consider an image classifier model that takes a picture as input and outputs a label and probability.

To run a predictive query with a model such as the one shown above, you must implement the following steps:

        1. Integrate the Model
        2. Register the Model
        3. Create an Index (Optional)
        4. Run a Prediction Query
        5. Deregister the Model
        "},{"location":"query-builder/#integrate-the-model","title":"Integrate the Model","text":"

        To integrate a model with Couchbase Lite, you must implement the PredictiveModel interface which has only one function called predict() \u2014 see Example 31.

        Example 31. Integrating a predictive model

        // tensorFlowModel is a fake implementation\nobject TensorFlowModel {\n    fun predictImage(data: ByteArray?): Map<String, Any?> = TODO()\n}\n\nobject ImageClassifierModel : PredictiveModel {\n    const val name = \"ImageClassifier\"\n\n    // this would be the implementation of the ml model you have chosen\n    override fun predict(input: Dictionary) = input.getBlob(\"photo\")?.let {\n        MutableDictionary(TensorFlowModel.predictImage(it.content)) \n    }\n}\n

The predict(input) -> output method takes the input and returns the result of applying the machine learning model. Both the input and output of a predictive model are Dictionary objects, so the supported data types are constrained to those that a Dictionary supports.

        "},{"location":"query-builder/#register-the-model","title":"Register the Model","text":"

        To register the model you must create a new instance and pass it to the Database.prediction.registerModel() static method.

        Example 32. Registering a predictive model

        Database.prediction.registerModel(\"ImageClassifier\", ImageClassifierModel)\n
        "},{"location":"query-builder/#create-an-index","title":"Create an Index","text":"

        Creating an index for a predictive query is highly recommended. By computing the predictions during writes and building a prediction index, you can significantly improve the speed of prediction queries (which would otherwise have to be computed during reads).

        There are two types of indexes for predictive queries:

        • Value Index
        • Predictive Index
        "},{"location":"query-builder/#value-index","title":"Value Index","text":"

        The code below creates a value index from the \"label\" value of the prediction result. When documents are added or updated, the index will call the prediction function to update the label value in the index.

        Example 33. Creating a value index

        database.createIndex(\n    \"value-index-image-classifier\",\n    IndexBuilder.valueIndex(ValueIndexItem.expression(Expression.property(\"label\")))\n)\n
        "},{"location":"query-builder/#predictive-index","title":"Predictive Index","text":"

Predictive Index is an index type used for predictive queries. It differs from a value index in that it caches the prediction results, and creates a value index from that cache when the prediction result values are specified.

        Example 34. Creating a predictive index

        Here we create a predictive index from the label value of the prediction result.

        val inputMap: Map<String, Any?> = mapOf(\"numbers\" to Expression.property(\"photo\"))\ncollection.createIndex(\n    \"predictive-index-image-classifier\",\n    IndexBuilder.predictiveIndex(\"ImageClassifier\", Expression.map(inputMap), null)\n)\n
        "},{"location":"query-builder/#run-a-prediction-query","title":"Run a Prediction Query","text":"

        The code below creates a query that calls the prediction function to return the \"label\" value for the first 10 results in the database.

Example 35. Running a prediction query

        val inputMap: Map<String, Any?> = mapOf(\"photo\" to Expression.property(\"photo\"))\nval prediction: PredictionFunction = Function.prediction(\n    ImageClassifierModel.name,\n    Expression.map(inputMap)\n)\n\nval query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(\n        prediction.propertyPath(\"label\").equalTo(Expression.string(\"car\"))\n            .and(\n                prediction.propertyPath(\"probability\")\n                    .greaterThanOrEqualTo(Expression.doubleValue(0.8))\n            )\n    )\n\nquery.execute().use {\n    println(\"Number of rows: ${it.allResults().size}\")\n}\n

The Function.prediction() method returns a PredictionFunction object, which can be used further to specify a property value extracted from the output dictionary of the PredictiveModel.predict() function.

        Note

A null value returned by the prediction method will be interpreted as a MISSING value in queries.

        "},{"location":"query-builder/#deregister-the-model","title":"Deregister the Model","text":"

        To deregister the model you must call the Database.prediction.unregisterModel() static method.

Example 36. Deregistering the model

        Database.prediction.unregisterModel(\"ImageClassifier\")\n
        "},{"location":"query-result-sets/","title":"Query Result Sets","text":"

        How to use Couchbase Lite Query\u2019s Result Sets

        "},{"location":"query-result-sets/#query-execution","title":"Query Execution","text":"

        The execution of a Couchbase Lite database query returns an array of results, a result set.

        Each row of the result set represents the data returned from a document that met the conditions defined by the WHERE statement of your query. The composition of each row is determined by the SelectResult expressions provided in the SELECT statement.

        "},{"location":"query-result-sets/#returned-results","title":"Returned Results","text":"

        Return All Document Properties | Return Document ID Only | Return Specific Properties Only

The types of SelectResult formats you may encounter include those generated by:

        • QueryBuilder.select(SelectResult.all()) \u2014 Using All
        • QueryBuilder.select(SelectResult.expression(Meta.id)) \u2014 Using Doc ID Metadata such as the _id
        • QueryBuilder.select(SelectResult.property(\"myProp\")) \u2014 Using Specific Properties
        "},{"location":"query-result-sets/#return-all-document-properties","title":"Return All Document Properties","text":"

The SelectResult returned by SelectResult.all() is a dictionary object, with the database name as the key and a dictionary of the document properties, as key-value pairs, as its value.

        Example 1. Returning All Properties

        [\n  {\n    \"travel-sample\": { \n      \"callsign\": \"MILE-AIR\",\n      \"country\": \"United States\",\n      \"iata\": \"Q5\",\n      \"icao\": \"MLA\",\n      \"id\": 10,\n      \"name\": \"40-Mile Air\",\n      \"type\": \"airline\"\n    }\n  },\n  {\n    \"travel-sample\": { \n      \"callsign\": \"ALASKAN-AIR\",\n      \"country\": \"United States\",\n      \"iata\": \"AA\",\n      \"icao\": \"AAA\",\n      \"id\": 10,\n      \"name\": \"Alaskan Airways\",\n      \"type\": \"airline\"\n    }\n  }\n]\n
        "},{"location":"query-result-sets/#return-document-id-only","title":"Return Document ID Only","text":"

        The SelectResult returned by queries using a SelectResult expression of the form SelectResult.expression(Meta.id) comprises a dictionary object with id as the key and the ID value as the value.

        Example 2. Returning Meta Properties \u2014 Document ID

        [\n  {\n    \"id\": \"hotel123\"\n  },\n  {\n    \"id\": \"hotel456\"\n  }\n]\n
        "},{"location":"query-result-sets/#return-specific-properties-only","title":"Return Specific Properties Only","text":"

        The SelectResult returned by queries using one or more SelectResult expressions of the form SelectResult.expression(property(\"name\")) comprises a key-value pair for each SelectResult expression in the query, the key being the property name.

        Example 3. Returning Specific Properties

        [\n  { \n    \"id\": \"hotel123\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Ghia\"\n  },\n  { \n    \"id\": \"hotel456\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Deluxe\",\n  }\n]\n
        "},{"location":"query-result-sets/#processing-results","title":"Processing Results","text":"

        Access Document Properties \u2014 All Properties | Access Document Properties \u2014 ID | Access Document Properties \u2014 Selected Properties

        To retrieve the results of your query, you need to execute it using Query.execute().

        The output from the execution is an array, with each array element representing the data from a document that matched your search criteria.

        To unpack the results you need to iterate through this array. Alternatively, you can convert the result to a JSON string \u2014 see: JSON Result Sets

        "},{"location":"query-result-sets/#access-document-properties-all-properties","title":"Access Document Properties - All Properties","text":"

        Here we look at how to access document properties when you have used SelectResult.all().

In this case each array element is a dictionary structure with the database name as its key. The properties are presented in the value as a dictionary of key-value pairs (property name/property value).

        You access the retrieved document properties by converting each row\u2019s value, in turn, to a dictionary \u2014 as shown in Example 4.

        Example 4. Access All Properties

        val hotels = mutableMapOf<String, Hotel>()\nquery.execute().use { rs ->\n    rs.allResults().forEach {\n        // get the k-v pairs from the 'hotel' key's value into a dictionary\n        val docProps = it.getDictionary(0) ?: return@forEach\n        val docId = docProps.getString(\"id\") ?: return@forEach\n\n        // use the retrieved properties to populate the Hotel data class\n        hotels[docId] = Hotel(\n            docId,\n            docProps.getString(\"type\"),\n            docProps.getString(\"name\"),\n            docProps.getString(\"city\"),\n            docProps.getString(\"country\"),\n            docProps.getString(\"description\")\n        )\n    }\n}\n
        "},{"location":"query-result-sets/#access-document-properties-id","title":"Access Document Properties - ID","text":"

        Here we look at how to access document properties when you have returned only the document IDs for documents that matched your selection criteria.

        This is something you may do when retrieval of the properties directly by the query may consume excessive amounts of memory and-or processing time.

        In this case each array element is a dictionary structure where id is the key and the required document ID is the value.

        Access the required document properties by retrieving the document from the database using its document ID \u2014 as shown in Example 5.

        Example 5. Access by ID

        query.execute().use { rs ->\n    rs.allResults().forEach {\n        // Extract the ID value from the dictionary\n        it.getString(\"id\")?.let { hotelId ->\n            println(\"hotel id -> $hotelId\")\n            // use the ID to get the document from the database\n            val doc = collection.getDocument(hotelId)\n        }\n    }\n}\n
        "},{"location":"query-result-sets/#access-document-properties-selected-properties","title":"Access Document Properties - Selected Properties","text":"

        Here we look at how to access properties when you have used SelectResult to get a specific subset of properties.

        In this case each array element is an array of key-value pairs (property name/property value).

        Access the retrieved properties by converting each row into a dictionary \u2014 as shown in Example 6.

        Example 6. Access Selected Properties

        query.execute().use { rs ->\n    rs.allResults().forEach {\n        println(\"Hotel name -> ${it.getString(\"name\")}, in ${it.getString(\"country\")}\")\n    }\n}\n
        "},{"location":"query-result-sets/#json-result-sets","title":"JSON Result Sets","text":"

        Use Result.toJSON() to transform your result into a JSON string, which can easily be serialized or used as required in your application. See Example 7 for a working example using kotlinx-serialization.

        Example 7. Using JSON Results

        // Uses kotlinx-serialization JSON processor\n@Serializable\ndata class Hotel(val id: String, val type: String, val name: String)\n\nval hotels = mutableListOf<Hotel>()\n\nval query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"type\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n\nquery.execute().use { rs ->\n    rs.forEach {\n\n        // Get result as JSON string\n        val json = it.toJSON()\n\n        // Get JsonObject map from JSON string\n        val mapFromJsonString = Json.decodeFromString<JsonObject>(json)\n\n        // Use created JsonObject map\n        val hotelId = mapFromJsonString[\"id\"].toString()\n        val hotelType = mapFromJsonString[\"type\"].toString()\n        val hotelName = mapFromJsonString[\"name\"].toString()\n\n        // Get custom object from JSON string\n        val hotel = Json.decodeFromString<Hotel>(json)\n        hotels.add(hotel)\n    }\n}\n
        "},{"location":"query-result-sets/#json-string-format","title":"JSON String Format","text":"

        If your query selects ALL then the JSON format will be:

        {\n  database-name: {\n    key1: \"value1\",\n    keyx: \"valuex\"\n  }\n}\n

        If your query selects a sub-set of available properties then the JSON format will be:

        {\n  key1: \"value1\",\n  keyx: \"valuex\"\n}\n
        "},{"location":"query-troubleshooting/","title":"Query Troubleshooting","text":"

        How to use the Couchbase Lite Query API\u2019s explain() method to examine a query

        "},{"location":"query-troubleshooting/#query-explain","title":"Query Explain","text":""},{"location":"query-troubleshooting/#using","title":"Using","text":"

        Query\u2019s explain() method can provide useful insight when you are trying to diagnose query performance issues and-or optimize queries. To examine how your query is working, either embed the call inside your app (see Example 1), or use it interactively within a cblite shell (see Example 2).

        Example 1. Using Query Explain in App

        val query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(Expression.property(\"type\").equalTo(Expression.string(\"university\")))\n    .groupBy(Expression.property(\"country\"))\n    .orderBy(Ordering.property(\"name\").descending()) \n\nprintln(query.explain())\n
        1. Construct your query as normal.
        2. Call the query\u2019s explain() method; all output is sent to the application\u2019s console log.

        Example 2. Using Query Explain in cblite

        cblite <your-database-name>.cblite2 \n\n(cblite) select --explain domains group by country order by country, name \n\n(cblite) query --explain {\"GROUP_BY\":[[\".country\"]],\"ORDER_BY\":[[\".country\"],[\".name\"]],\"WHAT\":[[\".domains\"]]} \n
        1. Within a terminal session open your database with cblite and enter your query
        2. Here the query is entered as a N1QL-query using select
        3. Here the query is entered as a JSON-string using query
        "},{"location":"query-troubleshooting/#output","title":"Output","text":"

        The output from explain() remains the same whether invoked by an app or cblite \u2014 see Example 3 for an example of how it looks.

        Example 3. Query.explain() Output

        SELECT fl_result(fl_value(_doc.body, 'domains')) FROM kv_default AS _doc WHERE (_doc.flags & 1 = 0) GROUP BY fl_value(_doc.body, 'country') ORDER BY fl_value(_doc.body, 'country'), fl_value(_doc.body, 'name')\n\n7|0|0| SCAN TABLE kv_default AS _doc\n12|0|0| USE TEMP B-TREE FOR GROUP BY\n52|0|0| USE TEMP B-TREE FOR ORDER BY\n\n{\"GROUP_BY\":[[\".country\"]],\"ORDER_BY\":[[\".country\"],[\".name\"]],\"WHAT\":[[\".domains\"]]}\n

        This output (Example 3) comprises three main elements:

        1. The translated SQL-query, which is not necessarily useful, being aimed more at Couchbase support and-or engineering teams.
        2. The SQLite query plan, which gives a high-level view of how the SQL query will be implemented. You can use this to identify potential issues and so optimize problematic queries.
        3. The query in JSON-string format, which you can copy-and-paste directly into the cblite tool.
        "},{"location":"query-troubleshooting/#the-query-plan","title":"The Query Plan","text":""},{"location":"query-troubleshooting/#format","title":"Format","text":"

        The query plan section of the output displays a tabular form of the translated query\u2019s execution plan. It primarily shows how the data will be retrieved and, where appropriate, how it will be sorted for navigation and-or presentation purposes. For more on SQLite\u2019s Explain Query Plan \u2014 see SQLite Explain Query Plan.

        Example 4. A Query Plan

        7|0|0| SCAN TABLE kv_default AS _doc\n12|0|0| USE TEMP B-TREE FOR GROUP BY\n52|0|0| USE TEMP B-TREE FOR ORDER BY\n
        1. Retrieval method \u2014 This line shows the retrieval method being used for the query; here a sequential read of the database. Something you may well be looking to optimize \u2014 see Retrieval Method for more.
        2. Grouping method \u2014 This line shows that the Group By clause used in the query requires the data to be sorted and that a b-tree will be used for temporary storage \u2014 see Order and Group.
        3. Ordering method \u2014 This line shows that the Order By clause used in the query requires the data to be sorted and that a b-tree will be used for temporary storage \u2014 see Order and Group.
        "},{"location":"query-troubleshooting/#retrieval-method","title":"Retrieval Method","text":"

        The query optimizer will attempt to retrieve the requested data items as efficiently as possible, which generally will be by using one or more of the available indexes. The retrieval method shows the approach decided upon by the optimizer \u2014 see Table 1.

        Table 1. Retrieval methods

        • Search: The query is able to access the required data directly, using keys into the index. Queries using the Search mode are the fastest.
        • Scan Index: The query retrieves the data by scanning all or part of the index (for example, when seeking to match values within a range). This is slower than Search, but at least benefits from the compact and ordered form of the index.
        • Scan Table: The query must scan the database table(s) to retrieve the required data. It is the slowest of these methods and will benefit most from some form of optimization.

        When looking to optimize a query\u2019s retrieval method, consider whether:

        • Providing an additional index makes sense
        • You could use an existing index \u2014 perhaps by restructuring the query to minimize wildcard use, or the reliance on functions that modify the query\u2019s interpretation of index keys (for example, lower())
        • You could reduce the data set being requested to minimize the query\u2019s footprint on the database
        "},{"location":"query-troubleshooting/#order-and-group","title":"Order and Group","text":"

        The Use temp b-tree for lines in the example indicate that the query requires sorting to cater for grouping and then sorting again to present the output results. Minimizing, if not eliminating, this ordering and re-ordering will obviously reduce the amount of time taken to process your query.

        Ask \"is the grouping and-or ordering absolutely necessary?\": if it isn\u2019t, drop it or modify it to minimize its impact.

        "},{"location":"query-troubleshooting/#queries-and-indexes","title":"Queries and Indexes","text":"

        Querying documents using a pre-existing database index is much faster because an index narrows down the set of documents to examine.

        When planning the indexes you need for your database, remember that while indexes make queries faster, they may also:

        • Make writes slightly slower, because each index must be updated whenever a document is updated
        • Make your Couchbase Lite database slightly larger.

        Too many indexes may hurt performance. Optimal performance depends on designing and creating the right indexes to go along with your queries.
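        As a sketch of index creation (assuming a Kotbase Collection named collection, and that Kotbase mirrors Couchbase Lite\u2019s ValueIndexConfiguration API), the typeIndex referenced in the query plans later on this page could be created like this:

```kotlin
// Sketch: create a value index on the "type" property.
// "typeIndex" is the index name the later query-plan examples refer to.
collection.createIndex("typeIndex", ValueIndexConfiguration("type"))
```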

        Constraints

        Couchbase Lite does not currently support partial value indexes, that is, indexes with non-property expressions. Index only those properties that you plan to use in your queries.

        The query optimizer converts your query into a parse tree that groups zero or more AND-connected clauses together (as dictated by your WHERE conditionals) for effective query engine processing.

        Ideally a query can satisfy its requirements entirely by either accessing the index directly or searching sequential index rows. It is less efficient if the query must scan the whole index, although the compact nature of most indexes means this is still much faster than scanning the entire database with no help from the indexes at all.

        Searches that begin with or rely upon an inequality with the primary key are inherently less effective than those using a primary key equality.

        "},{"location":"query-troubleshooting/#working-with-the-query-optimizer","title":"Working with the Query Optimizer","text":"

        You may have noticed that a query sometimes runs faster on a second run, after re-opening the database, or after deleting and recreating an index. This typically happens when the SQLite query optimizer has gathered sufficient stats to recognize a means of optimizing a suboptimal query.

        Ideally those stats would be available from the start. In fact, they are gathered only after certain events, such as:

        • Following index creation
        • On a database close
        • When running a database compact

        So, if your analysis of the Query Explain output indicates a suboptimal query and your rewrites fail to sufficiently optimize it, consider compacting the database. Then re-generate the Query Explain and note any improvements in optimization. They may not, in themselves, resolve the issue entirely; but they can provide a useful guide toward further optimizing changes you could make.
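        For example, a hedged sketch of forcing a compact (assuming Kotbase exposes Couchbase Lite\u2019s performMaintenance() API on Database, and that query is a previously built Query):

```kotlin
// Sketch: compact the database so the optimizer stats are regenerated,
// then re-inspect the plan for improvements.
database.performMaintenance(MaintenanceType.COMPACT)
println(query.explain())
```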

        "},{"location":"query-troubleshooting/#wildcard-and-like-based-queries","title":"Wildcard and Like-based Queries","text":"

        Like-based searches can use the index(es) only if:

        • The search-string doesn\u2019t start with a wildcard
        • The primary search expression uses a property that is an indexed key
        • The search-string is a constant known at run time (that is, not a value derived during processing of the query)

        To illustrate this we can use a modified query from the Mobile Travel Sample application; replacing a simple equality test with a LIKE.

        In Example 5 we use a wildcard prefix and suffix. You can see that the query plan decides on a retrieval method of Scan Table.

        Tip

        For more on indexes \u2014 see Indexing

        Example 5. Like with Wildcard Prefix

        val query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").like(Expression.string(\"%hotel%\"))\n            .and(Expression.property(\"name\").like(Expression.string(\"%royal%\")))\n    )\nprintln(query.explain())\n

        The indexed property, type, cannot use its index because of the wildcard prefix.

        Resulting Query Plan
        2|0|0| SCAN TABLE kv_default AS _doc\n

        By contrast, removing the wildcard prefix % (in Example 6) changes the query plan\u2019s retrieval method to an index search. Where practical, simple changes like this can make significant differences in query performance.

        Example 6. Like with No Wildcard-prefix

        val query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").like(Expression.string(\"hotel%\"))\n            .and(Expression.property(\"name\").like(Expression.string(\"%royal%\")))\n    )\nprintln(query.explain())\n

        Simply removing the wildcard prefix enables the query optimizer to access the typeIndex, which results in a more efficient search.

        Resulting Query Plan
        3|0|0| SEARCH TABLE kv_default AS _doc USING INDEX typeIndex (<expr>>? AND <expr><?)\n
        "},{"location":"query-troubleshooting/#use-functions-wisely","title":"Use Functions Wisely","text":"

        Functions are a very useful tool in building queries, but be aware that they can affect whether the query optimizer is able to use your index(es).

        For example, you can observe a similar situation to that shown in Wildcard and Like-based Queries when using the lower() function on an indexed property.

        Query
        val query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(Function.lower(Expression.property(\"type\")).equalTo(Expression.string(\"hotel\")))\nprintln(query.explain())\n

        Here we use the lower() function in the Where expression

        Query Plan
        2|0|0| SCAN TABLE kv_default AS _doc\n

        But removing the lower() function, changes things:

        Query
        val query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(Expression.property(\"type\").equalTo(Expression.string(\"hotel\"))) \nprintln(query.explain())\n

        Here we have removed lower() from the Where expression

        Query Plan
        3|0|0| SEARCH TABLE kv_default AS _doc USING INDEX typeIndex (<expr>=?)\n

        Knowing this, you can consider how you create the index; for example, using lower() when you create the index and then always using lowercase comparisons.
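        A possible sketch of that approach, assuming the QueryBuilder-style index API (IndexBuilder/ValueIndexItem) is available in Kotbase; lcTypeIndex is a hypothetical index name:

```kotlin
// Sketch: index the lower-cased value of "type" so that
// case-insensitive comparisons can still use the index.
collection.createIndex(
    "lcTypeIndex",
    IndexBuilder.valueIndex(
        ValueIndexItem.expression(Function.lower(Expression.property("type")))
    )
)

// Queries then keep using lower() and still hit the index:
val query = QueryBuilder
    .select(SelectResult.all())
    .from(DataSource.collection(collection))
    .where(Function.lower(Expression.property("type")).equalTo(Expression.string("hotel")))
```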

        "},{"location":"query-troubleshooting/#optimization-considerations","title":"Optimization Considerations","text":"

        Try to minimize the amount of data retrieved. Reduce it to the few properties you really need to achieve the required result.

        Consider fetching details lazily. You could break complex queries into components: return just the doc IDs, then process the array of doc IDs using either the Document API or a query that takes the array of doc IDs as input.

        Consider using paging to minimize the data returned when the number of results is expected to be high. Retrieving everything at once will be slow and resource intensive, and it is unlikely anyone needs all the results in one go. Instead, retrieve batches of information at a time, perhaps using the LIMIT/OFFSET feature to set a starting point for each subsequent batch. Note, though, that query offsets become increasingly less effective as the overhead of skipping a growing number of rows increases. You can work around this by instead using ranges of search-key values: if the last search-key value of batch one was 'x', then that value becomes the starting point for the next batch, and so on.
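        The LIMIT/OFFSET approach can be sketched as follows (pageSize and the Meta.id projection are illustrative choices, not part of the original examples):

```kotlin
// Sketch: fetch results one page at a time using LIMIT/OFFSET.
val pageSize = 20
var offset = 0
do {
    // each iteration retrieves at most one page of document IDs
    val page = QueryBuilder
        .select(SelectResult.expression(Meta.id))
        .from(DataSource.collection(collection))
        .limit(Expression.intValue(pageSize), Expression.intValue(offset))
        .execute()
        .use { rs -> rs.allResults() }
    page.forEach { println(it.getString("id")) }
    offset += pageSize
} while (page.size == pageSize) // a short page means we reached the end
```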

        Optimize document size in design. Smaller docs load more quickly. Break your data into logical linked units.

        Consider using Full-Text Search instead of complex LIKE or regex patterns \u2014 see Full Text Search.
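        As a hedged sketch of that idea (assuming an FTS index created via FullTextIndexConfiguration and SQL++\u2019s MATCH() function; descIndex is a hypothetical index name):

```kotlin
// Sketch: replace a LIKE '%...%' table scan with a full-text search.
collection.createIndex("descIndex", FullTextIndexConfiguration("description"))

// SQL++ query against the FTS index
val ftsQuery = database.createQuery(
    "SELECT META().id FROM _ WHERE MATCH(descIndex, 'royal')"
)
ftsQuery.execute().use { rs ->
    rs.allResults().forEach { println(it.getString(0)) }
}
```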

        "},{"location":"remote-sync-gateway/","title":"Remote Sync Gateway","text":"

        Couchbase Lite \u2014 Synchronizing data changes between local and remote databases using Sync Gateway

        Android enablers

        Allow Unencrypted Network Traffic

        To use cleartext, un-encrypted, network traffic (http:// and-or ws://), include android:usesCleartextTraffic=\"true\" in the application element of the manifest as shown on developer.android.com. This is not recommended in production.

        Use Background Threads

        As with any network or file I/O activity, Couchbase Lite activities should not be performed on the UI thread. Always use a background thread.
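        For example, a sketch using Kotlin coroutines (kotlinx-coroutines is an assumed dependency; query is a previously built Query):

```kotlin
// Sketch: keep query execution off the UI thread.
suspend fun loadResults(query: Query): List<Result> =
    withContext(Dispatchers.IO) {
        query.execute().use { rs -> rs.allResults() }
    }
```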

        Code Snippets

        All code examples are indicative only. They demonstrate the basic concepts and approaches to using a feature. Use them as inspiration and adapt these examples to best practice when developing applications for your platform.

        "},{"location":"remote-sync-gateway/#introduction","title":"Introduction","text":"

        Couchbase Lite provides API support for secure, bi-directional, synchronization of data changes between mobile applications and a central server database. It does so by using a replicator to interact with Sync Gateway.

        The replicator is designed to manage replication of documents and-or document changes between a source and a target database. For example, between a local Couchbase Lite database and remote Sync Gateway database, which is ultimately mapped to a bucket in a Couchbase Server instance in the cloud or on a server.

        This page shows sample code and configuration examples covering the implementation of a replication using Sync Gateway.

        Your application runs a replicator (also referred to here as a client), which will initiate a connection with a Sync Gateway (also referred to here as a server) and participate in the replication of database changes to bring both local and remote databases into sync.

        Subsequent sections provide additional details and examples for the main configuration options.

        "},{"location":"remote-sync-gateway/#replication-concepts","title":"Replication Concepts","text":"

        Couchbase Lite allows for one database for each application running on the mobile device. This database can contain one or more scopes. Each scope can contain one or more collections.

        To learn about Scopes and Collections, see Databases.

        You can set up a replication scheme across these data levels:

        Database: The _default collection is synced.

        Collection: A specific collection or a set of collections is synced.

        As part of the syncing setup, the Sync Gateway has to map the Couchbase Lite database to the Couchbase Server or Capella database being synced.

        "},{"location":"remote-sync-gateway/#replication-protocol","title":"Replication Protocol","text":""},{"location":"remote-sync-gateway/#scheme","title":"Scheme","text":"

        Couchbase Mobile uses a replication protocol based on WebSockets for replication. To use this protocol the replication URL should specify WebSockets as the URL scheme (see the Configure Target section below).

        "},{"location":"remote-sync-gateway/#ordering","title":"Ordering","text":"

        To optimize for speed, the replication protocol doesn\u2019t guarantee that documents will be received in a particular order. So we don\u2019t recommend relying on document order, for example in replication or database change listeners.

        "},{"location":"remote-sync-gateway/#scopes-and-collections","title":"Scopes and Collections","text":"

        Scopes and Collections allow you to organize your documents in Couchbase Lite.

        When syncing, you can configure the collections to be synced.

        The collections specified in the Couchbase Lite replicator setup must exist (both scope and collection name must be identical) on the Sync Gateway side, otherwise starting the Couchbase Lite replicator will result in an error.

        During replication:

        1. If Sync Gateway config (or server) is updated to remove a collection that is being synced, the client replicator will go offline and stop after the first retry. An error will be reported.
        2. If Sync Gateway config is updated to add a collection to a scope that is being synchronized, the replication will ignore the collection. The added collection will not automatically sync until the Couchbase Lite replicator\u2019s configuration is updated.
        "},{"location":"remote-sync-gateway/#default-collection","title":"Default Collection","text":"

        When upgrading Couchbase Lite to 3.1, the existing documents in the database will be automatically migrated to the default collection.

        For backward compatibility with the code prior to 3.1, when you set up the replicator with the database, the default collection will be set up to sync with the default collection on Sync Gateway.

        Sync Couchbase Lite database with the default collection on Sync Gateway

        Sync Couchbase Lite default collection with default collection on Sync Gateway
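        A hedged sketch of the backward-compatible setup (assuming database.defaultCollection mirrors Couchbase Lite\u2019s getDefaultCollection(); the endpoint URL is a placeholder):

```kotlin
// Sketch: sync only the default collection, as pre-3.1 code would.
val config = ReplicatorConfiguration(
    URLEndpoint("wss://sg.example.com:4984/travel-sample")
).addCollection(database.defaultCollection, null) // null = default CollectionConfiguration
val replicator = Replicator(config)
```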

        "},{"location":"remote-sync-gateway/#user-defined-collections","title":"User-Defined Collections","text":"

        The user-defined collections specified in the Couchbase Lite replicator setup must exist (and be identical) on the Sync Gateway side to sync.

        Syncing scope with user-defined collections

        Syncing scope with user-defined collections. Couchbase Lite has more collections than the Sync Gateway configuration (with collection filters)

        "},{"location":"remote-sync-gateway/#configuration-summary","title":"Configuration Summary","text":"

        You should configure and initialize a replicator for each Couchbase Lite database instance you want to sync. Example 1 shows the configuration and initialization process.

        Note

        You need Couchbase Lite 3.1+ and Sync Gateway 3.1+ to use custom Scopes and Collections. If you\u2019re using Capella App Services or Sync Gateway releases that are older than version 3.1, you won\u2019t be able to access custom Scopes and Collections. To use Couchbase Lite 3.1+ with these older versions, you can use the default Collection as a backup option.

        Example 1. Replication configuration and initialization

        val repl = Replicator(\n    // initialize the replicator configuration\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"wss://listener.com:8954\"),\n\n        collections = mapOf(collections to null),\n\n        // Set replicator type\n        type = ReplicatorType.PUSH_AND_PULL,\n\n        // Configure Sync Mode\n        continuous = false, // default value\n\n        // set auto-purge behavior\n        // (here we override default)\n        enableAutoPurge = false,\n\n        // Configure Server Authentication --\n        // only accept self-signed certs\n        acceptOnlySelfSignedServerCertificate = true,\n\n        // Configure the credentials the\n        // client will provide if prompted\n        authenticator = BasicAuthenticator(\"PRIVUSER\", \"let me in\".toCharArray())\n    )\n)\n\n// Optionally add a change listener\nval token = repl.addChangeListener { change ->\n    val err: CouchbaseLiteException? = change.status.error\n    if (err != null) {\n        println(\"Error code ::  ${err.code}\\n$err\")\n    }\n}\n\n// Start replicator\nrepl.start(false)\n\nthis.replicator = repl\nthis.token = token\n

        Notes on Example

        1. Get endpoint for target database.
        2. Use the ReplicatorConfiguration class\u2019s constructor \u2014 ReplicatorConfiguration(Endpoint) \u2014 to initialize the replicator configuration \u2014 see also Configure Target.
        3. The default is to auto-purge documents that this user no longer has access to \u2014 see Auto-purge on Channel Access Revocation. Here we override this behavior by setting its flag to false.
        4. Configure how the client will authenticate the server. Here we say connect only to servers presenting a self-signed certificate. By default, clients accept only servers presenting certificates that can be verified using the OS bundled Root CA Certificates \u2014 see Server Authentication.
        5. Configure the client-authentication credentials (if required). These are the credentials the client will present to Sync Gateway if requested to do so. Here we configure the replicator to provide Basic Authentication credentials. Other options are available \u2014 see Client Authentication.
        6. Configure how the replication should handle conflict resolution \u2014 see the Handling Data Conflicts topic for more on conflict resolution.
        7. Initialize the replicator using your configuration \u2014 see Initialize.
        8. Optionally, register an observer, which will notify you of changes to the replication status \u2014 see Monitor .
        9. Start the replicator \u2014 see Start Replicator.
        "},{"location":"remote-sync-gateway/#configure","title":"Configure","text":"

        In this section Configure Target | Sync Mode | Retry Configuration | User Authorization | Server Authentication | Client Authentication | Monitor Document Changes | Custom Headers | Checkpoint Starts | Replication Filters | Channels | Auto-purge on Channel Access Revocation | Delta Sync

        "},{"location":"remote-sync-gateway/#configure-target","title":"Configure Target","text":"

        Initialize and define the replication configuration with local and remote database locations using the ReplicatorConfiguration object.

        The constructor provides the server\u2019s URL (including the port number and the name of the remote database to sync with).

        It is expected that the app will identify the IP address and URL and append the remote database name to the URL endpoint, producing for example: wss://10.0.2.2:4984/travel-sample.

        The URL scheme for web socket URLs uses ws: (non-TLS) or wss: (SSL/TLS) prefixes.

        Note

        On the Android platform, to use cleartext, un-encrypted, network traffic (http:// and-or ws://), include android:usesCleartextTraffic=\"true\" in the application element of the manifest as shown on developer.android.com. This is not recommended in production.

        Add the database collections to sync, along with the CollectionConfiguration for each, to the ReplicatorConfiguration. Multiple collections can share the same configuration, or each can have its own, as needed. A null configuration will use the default configuration values, found in Defaults.Replicator.

        Example 2. Add Target to Configuration

        // initialize the replicator configuration\nval config = ReplicatorConfiguration(\n    URLEndpoint(\"wss://10.0.2.2:8954/travel-sample\")\n).addCollections(collections, null)\n

        Note the use of the scheme prefix: wss:// ensures TLS encryption (strongly recommended in production), while ws:// is unencrypted.

        "},{"location":"remote-sync-gateway/#sync-mode","title":"Sync Mode","text":"

        Here we define the direction and type of replication we want to initiate.

        We use the ReplicatorConfiguration class\u2019s type and isContinuous parameters to tell the replicator:

        • The type (or direction) of the replication: PUSH_AND_PULL; PULL; PUSH
        • The replication mode, that is either of:
          • Continuous \u2014 remaining active indefinitely to replicate changed documents (isContinuous=true).
          • Ad-hoc \u2014 a one-shot replication of changed documents (isContinuous=false).

        Example 3. Configure replicator type and mode

        // Set replicator type\ntype = ReplicatorType.PUSH_AND_PULL,\n\n// Configure Sync Mode\ncontinuous = false, // default value\n

        Tip

        Unless there is a solid use-case not to, always initiate a single PUSH_AND_PULL replication rather than identical separate PUSH and PULL replications.

        This prevents the replications from generating the same checkpoint docID, which would result in multiple conflicts.

        "},{"location":"remote-sync-gateway/#retry-configuration","title":"Retry Configuration","text":"

        Couchbase Lite\u2019s replication retry logic assures a resilient connection.

        The replicator minimizes the chance and impact of dropped connections by maintaining a heartbeat; essentially pinging the Sync Gateway at a configurable interval to ensure the connection remains alive.

        In the event it detects a transient error, the replicator will attempt to reconnect, stopping only when the connection is re-established, or the number of retries exceeds the retry limit (9 times for a single-shot replication and unlimited for a continuous replication).

        On each retry the interval between attempts is increased exponentially (exponential backoff) up to the maximum wait time limit (5 minutes).

        The ReplicatorConfiguration API provides control over this replication retry logic using a set of configurable properties \u2014 see Table 1.

        Table 1. Replication Retry Configuration Properties

        setHeartbeat()
        Use cases:
        • Reduce to detect connection errors sooner
        • Align to load-balancer or proxy keep-alive interval \u2014 see Sync Gateway\u2019s topic Load Balancer - Keep Alive
        Description: The interval (in seconds) between the heartbeat pulses. Default: the replicator pings the Sync Gateway every 300 seconds.

        setMaxAttempts()
        Use cases: Change this to limit or extend the number of retry attempts.
        Description: The maximum number of retry attempts.
        • Set to zero (0) to use default values
        • Set to one (1) to prevent any retry attempt
        • The retry attempt count is reset when the replicator is able to connect and replicate
        • Default values are:
          • Single-shot replication = 9
          • Continuous replication = maximum integer value
        • Negative values generate a Couchbase exception, InvalidArgumentException

        setMaxAttemptWaitTime()
        Use cases: Change this to adjust the interval between retries.
        Description: The maximum interval between retry attempts. While you can configure the maximum permitted wait time, the replicator\u2019s exponential backoff algorithm calculates each individual interval, which is not configurable.
        • Default value: 300 seconds (5 minutes)
        • Zero sets the maximum interval between retries to the default of 300 seconds
        • 300 sets the maximum interval between retries to the default of 300 seconds
        • A negative value generates a Couchbase exception, InvalidArgumentException

        When necessary, you can adjust any or all of those configurable values \u2014 see Example 4 for how to do this.

        Example 4. Configuring Replication Retries

        val repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to null),\n        //  other config params as required . .\n        heartbeat = 150, \n        maxAttempts = 20,\n        maxAttemptWaitTime = 600\n    )\n)\nrepl.start()\nthis.replicator = repl\n
        "},{"location":"remote-sync-gateway/#user-authorization","title":"User Authorization","text":"

        By default, Sync Gateway does not enable user authorization. This makes it easier to get up and running with synchronization.

        You can enable authorization in the Sync Gateway configuration file, as shown in Example 5.

        Example 5. Enable Authorization

        {\n  \"databases\": {\n    \"mydatabase\": {\n      \"users\": {\n        \"GUEST\": { \"disabled\": true }\n      }\n    }\n  }\n}\n

        To authorize with Sync Gateway, an associated user must first be created. Sync Gateway users can be created through the POST /{db}/_user endpoint on the Admin REST API.
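        As a sketch, the request body for that endpoint can be assembled as plain JSON. The user name, password, and channel list below are placeholder values, and the admin_channels field is an assumption based on Sync Gateway's user API; check the Admin REST API reference for the exact schema.

```kotlin
// Sketch: build the JSON body for POST /{db}/_user on the Admin REST API.
// All values below are placeholders; "admin_channels" is an assumption
// based on Sync Gateway's user API.
fun createUserBody(name: String, password: String, channels: List<String>): String {
    val chans = channels.joinToString(", ") { "\"" + it + "\"" }
    return """{"name": "$name", "password": "$password", "admin_channels": [$chans]}"""
}

fun main() {
    // POST this body to the admin port, e.g. http://localhost:4985/mydatabase/_user/
    println(createUserBody("username", "password", listOf("public")))
}
```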

        "},{"location":"remote-sync-gateway/#server-authentication","title":"Server Authentication","text":"

        Define the credentials your app (the client) expects to receive from the Sync Gateway (the server), to ensure it is prepared to continue with the sync.

        Note that the client cannot authenticate the server if TLS is turned off. When TLS is enabled (Sync Gateway\u2019s default), the client must authenticate the server. If the server cannot provide acceptable credentials, the connection will fail.

        Use the ReplicatorConfiguration properties setAcceptOnlySelfSignedServerCertificate and setPinnedServerCertificate to tell the replicator how to verify server-supplied TLS certificates.

        • If there is a pinned certificate, nothing else matters: the server certificate must exactly match the pinned certificate.
        • If there are no pinned certs and setAcceptOnlySelfSignedServerCertificate is true then any self-signed certificate is accepted. Certificates that are not self-signed are rejected, no matter who signed them.
        • If there are no pinned certificates and setAcceptOnlySelfSignedServerCertificate is false (default), the client validates the server\u2019s certificates against the system CA certificates. The server must supply a chain of certificates whose root is signed by one of the certificates in the system CA bundle.
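        The precedence of these three rules can be modeled in a short sketch. This is illustrative only; the real verification happens inside the replicator.

```kotlin
// Illustrative model of the server certificate verification rules.
// Not the replicator's actual implementation.
enum class CertKind { PINNED_MATCH, SELF_SIGNED, CA_SIGNED }

fun serverCertAccepted(
    hasPinnedCert: Boolean,
    acceptOnlySelfSigned: Boolean,
    serverCert: CertKind
): Boolean = when {
    // 1. A pinned certificate takes precedence over everything else
    hasPinnedCert -> serverCert == CertKind.PINNED_MATCH
    // 2. Only self-signed certificates are accepted in this mode
    acceptOnlySelfSigned -> serverCert == CertKind.SELF_SIGNED
    // 3. Default: the cert chain must be rooted in the system CA bundle
    else -> serverCert == CertKind.CA_SIGNED
}
```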

        Example 6. Set Server TLS security

        CA Cert | Self-Signed Cert | Pinned Certificate

        Set the client to expect and accept only CA attested certificates.

        // Configure Server Security\n// -- only accept CA attested certs\nacceptOnlySelfSignedServerCertificate = false,\n

        This is the default. Only certificate chains with roots signed by a trusted CA are allowed. Self-signed certificates are not allowed.

        Set the client to expect and accept only self-signed certificates.

        // Configure Server Authentication --\n// only accept self-signed certs\nacceptOnlySelfSignedServerCertificate = true,\n

        Set this to true to accept any self-signed cert. Any certificates that are not self-signed are rejected.

        Set the client to expect and accept only a pinned certificate.

        // Use the pinned certificate from the byte array (cert)\npinnedServerCertificate = TLSIdentity.getIdentity(\"Our Corporate Id\")\n    ?.certs?.firstOrNull()\n    ?: throw IllegalStateException(\"Cannot find corporate id\"),\n

        Configure the pinned certificate; here it is obtained from the first certificate of a stored TLSIdentity rather than from a raw byte array.

        This all assumes that you have configured the Sync Gateway to provide the appropriate SSL certificates, and have included the appropriate certificate in your app bundle \u2014 for more on this see Certificate Pinning.

        "},{"location":"remote-sync-gateway/#client-authentication","title":"Client Authentication","text":"

        There are two ways to authenticate from a Couchbase Lite client: Basic Authentication or Session Authentication.

        "},{"location":"remote-sync-gateway/#basic-authentication","title":"Basic Authentication","text":"

        You can provide a username and password to the BasicAuthenticator class. Under the hood, the replicator will send the credentials in the first request to retrieve a SyncGatewaySession cookie, then use it for all subsequent requests during the replication. This is the recommended way of using basic authentication. Example 7 shows how to initiate a one-shot replication as the user username with the password password.

        Example 7. Basic Authentication

        // Create replicator (be sure to hold a reference somewhere that will prevent the Replicator from being GCed)\nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to null),\n        authenticator = BasicAuthenticator(\"username\", \"password\".toCharArray())\n    )\n)\nrepl.start()\nthis.replicator = repl\n
        "},{"location":"remote-sync-gateway/#session-authentication","title":"Session Authentication","text":"

        Session authentication is another way to authenticate with Sync Gateway.

        A user session must first be created through the POST /{db}/_session endpoint on the Public REST API.

        The HTTP response contains a session ID which can then be used to authenticate as the user it was created for.

        See Example 8, which shows how to initiate a one-shot replication with the session ID returned from the POST /{db}/_session endpoint.

        Example 8. Session Authentication

        // Create replicator (be sure to hold a reference somewhere that will prevent the Replicator from being GCed)\nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to null),\n        authenticator = SessionAuthenticator(\"904ac010862f37c8dd99015a33ab5a3565fd8447\")\n    )\n)\nrepl.start()\nthis.replicator = repl\n
        "},{"location":"remote-sync-gateway/#custom-headers","title":"Custom Headers","text":"

        Custom headers can be set on the configuration object. The replicator will then include those headers in every request.

        This feature is useful in passing additional credentials, perhaps when an authentication or authorization step is being done by a proxy server (between Couchbase Lite and Sync Gateway) \u2014 see Example 9.

        Example 9. Setting custom headers

        // Create replicator (be sure to hold a reference somewhere that will prevent the Replicator from being GCed)\nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to null),\n        headers = mapOf(\"CustomHeaderName\" to \"Value\")\n    )\n)\nrepl.start()\nthis.replicator = repl\n
        "},{"location":"remote-sync-gateway/#replication-filters","title":"Replication Filters","text":"

        Replication filters give you control over which documents are stored as the result of a push and/or pull replication.

        "},{"location":"remote-sync-gateway/#push-filter","title":"Push Filter","text":"

        The push filter allows an app to push a subset of a database to the server. This can be very useful. For instance, high-priority documents could be pushed first, or documents in a \"draft\" state could be skipped.

        val collectionConfig = CollectionConfigurationFactory.newConfig(\n    pushFilter = { _, flags -> flags.contains(DocumentFlag.DELETED) }\n)\n\n// Create replicator (be sure to hold a reference somewhere that will prevent the Replicator from being GCed)\nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to collectionConfig)\n    )\n)\nrepl.start()\nthis.replicator = repl\n

        The callback should follow the semantics of a pure function: it should be fast and free of side effects, since a long-running callback will slow the replicator considerably. Furthermore, your callback should not make assumptions about which thread it is called on.

        "},{"location":"remote-sync-gateway/#pull-filter","title":"Pull Filter","text":"

        The pull filter gives an app the ability to validate documents being pulled, and skip ones that fail. This is an important security mechanism in a peer-to-peer topology with peers that are not fully trusted.

        Note

        Pull replication filters are not a substitute for channels. Sync Gateway channels are designed to be scalable (documents are filtered on the server) whereas a pull replication filter is applied to a document once it has been downloaded.

        val collectionConfig = CollectionConfigurationFactory.newConfig(\n    pullFilter = { document, _ -> \"draft\" == document.getString(\"type\") }\n)\n\n// Create replicator (be sure to hold a reference somewhere that will prevent the Replicator from being GCed)\nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to collectionConfig)\n    )\n)\nrepl.start()\nthis.replicator = repl\n

        The callback should follow the semantics of a pure function: it should be fast and free of side effects, since a long-running callback will slow the replicator considerably. Furthermore, your callback should not make assumptions about which thread it is called on.

        Losing access to a document via the Sync Function.

        Losing access to a document (via the Sync Function) also triggers the pull replication filter.

        Filtering out such an event would retain the document locally.

        As a result, there would be a local copy of the document that has diverged from the one residing on Couchbase Server.

        Further updates to the document stored on Couchbase Server would not be received in pull replications, and further local edits could still be pushed, but the updated versions would not be visible.

        For more information, see Auto-purge on Channel Access Revocation.

        "},{"location":"remote-sync-gateway/#channels","title":"Channels","text":"

        By default, Couchbase Lite gets all the channels to which the configured user account has access.

        This behavior is suitable for most apps that rely on user authentication and the sync function to specify which data to pull for each user.

        Optionally, it\u2019s also possible to specify a string array of channel names on Couchbase Lite\u2019s replicator configuration object. In this case, the replication from Sync Gateway will only pull documents tagged with those channels.
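        A hedged sketch of such a configuration is shown below. The channels parameter on CollectionConfigurationFactory.newConfig() and the channel names are assumptions; consult the CollectionConfiguration API documentation for the exact signature.

```kotlin
// Hedged sketch: pull only documents tagged with the named channels.
// The "channels" parameter and the channel names are assumptions.
val collectionConfig = CollectionConfigurationFactory.newConfig(
    channels = listOf("channel.one", "channel.two")
)

val repl = Replicator(
    ReplicatorConfigurationFactory.newConfig(
        target = URLEndpoint("ws://localhost:4984/mydatabase"),
        collections = mapOf(collections to collectionConfig),
        type = ReplicatorType.PULL
    )
)
```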

        "},{"location":"remote-sync-gateway/#auto-purge-on-channel-access-revocation","title":"Auto-purge on Channel Access Revocation","text":"

        This is a Breaking Change at 3.0

        "},{"location":"remote-sync-gateway/#new-outcome","title":"New outcome","text":"

        By default, when a user loses access to a channel all documents in the channel (that do not also belong to any of the user\u2019s other channels) are auto-purged from the local database (in devices belonging to the user).

        "},{"location":"remote-sync-gateway/#prior-outcome","title":"Prior outcome","text":"

        Previously, these documents remained in the local database.

        Prior to 3.0, Couchbase Lite auto-purged only when the user lost access to a document because it was removed from all of the channels belonging to the user. Now, in addition to the 2.x auto-purge, Couchbase Lite also auto-purges documents when the user loses access to them via channel access revocation. This feature is enabled by default, but an opt-out is available.

        "},{"location":"remote-sync-gateway/#behavior","title":"Behavior","text":"

        Users may lose access to channels in a number of ways:

        • User loses direct access to channel
        • User is removed from a role
        • A channel is removed from a role the user is assigned to

        By default, when a user loses access to a channel, the next Couchbase Lite pull replication auto-purges all documents in the channel from local Couchbase Lite databases (on devices belonging to the user) unless they belong to any of the user\u2019s other channels \u2014 see Table 2.

        Documents that exist in multiple channels belonging to the user (even if they are not actively replicating that channel) are not auto-purged unless the user loses access to all channels.

        Users will receive an ACCESS_REMOVED notification from the DocumentReplicationListener if they lose document access due to channel access revocation; this is sent regardless of the current auto-purge setting.

        Table 2. Behavior following access revocation

        For each replication type, the system state (access control on Sync Gateway) and the impact on sync (expected behavior when isAutoPurgeEnabled=true) are:

        Pull only
        • Access control: user REVOKED access to channel; Sync Function includes requireAccess(revokedChannel)
        • Expected behavior: previously synced documents are auto-purged on the local database

        Push only
        • Access control: user REVOKED access to channel; Sync Function includes requireAccess(revokedChannel)
        • Expected behavior: no impact of auto-purge; documents get pushed but are rejected by Sync Gateway

        Push-pull
        • Access control: user REVOKED access to channel; Sync Function includes requireAccess(revokedChannel)
        • Expected behavior: previously synced documents are auto-purged on Couchbase Lite; local changes continue to be pushed to remote but are rejected by Sync Gateway

        If a user subsequently regains access to a lost channel, then any previously auto-purged documents still assigned to any of their channels are automatically pulled down by the active Sync Gateway when they are next updated \u2014 see behavior summary in Table 3.

        Table 3. Behavior if access is regained

        For each replication type, the system state (access control on Sync Gateway) and the impact on sync (expected behavior when isAutoPurgeEnabled=true) are:

        Pull only
        • Access control: user REASSIGNED access to channel
        • Expected behavior: previously purged documents that are still in the channel are automatically pulled by Couchbase Lite when they are next updated

        Push only
        • Access control: user REASSIGNED access to channel; Sync Function includes requireAccess(reassignedChannel)
        • Expected behavior: no impact of auto-purge; local changes previously rejected by Sync Gateway will not be automatically pushed to remote unless the checkpoint is reset on Couchbase Lite; document changes subsequent to the channel reassignment are pushed as usual

        Push-pull
        • Access control: user REASSIGNED access to channel; Sync Function includes requireAccess(reassignedChannel)
        • Expected behavior: previously purged documents are automatically pulled by Couchbase Lite; local changes previously rejected by Sync Gateway will not be automatically pushed to remote unless the checkpoint is reset; document changes subsequent to the channel reassignment are pushed as usual

        "},{"location":"remote-sync-gateway/#config","title":"Config","text":"

        Auto-purge behavior is controlled primarily by the ReplicatorConfiguration option setAutoPurgeEnabled(). Changing this setting will impact only future replications; the replicator will not attempt to sync revisions that were auto-purged on channel access removal. Clients wishing to sync previously removed documents must resync from the start by resetting the replication checkpoint.

        Example 10. Setting auto-purge

        // set auto-purge behavior\n// (here we override default)\nenableAutoPurge = false,\n

        Here we have opted to turn off auto-purge. By default, auto-purge is enabled.

        "},{"location":"remote-sync-gateway/#overrides","title":"Overrides","text":"

        Where necessary, clients can override the default auto-purge behavior. This can be done either by setting setAutoPurgeEnabled() to false or, for finer control, by applying pull filters \u2014 see Table 4 and Replication Filters. This ensures backward compatibility with 2.8 clients that use pull filters to prevent auto-purge of removed docs.

        Table 4. Impact of Pull-Filters

        purge_on_removal = disabled
        • Doc remains in the local database
        • App is notified of ACCESS_REMOVED if a DocumentReplicationListener is registered

        purge_on_removal = enabled (DEFAULT)
        • Pull filter not defined: doc is auto-purged; app is notified of ACCESS_REMOVED if a DocumentReplicationListener is registered
        • Pull filter defined to filter removals/revoked docs: doc remains in the local database
        "},{"location":"remote-sync-gateway/#delta-sync","title":"Delta Sync","text":"

        This is an Enterprise Edition feature.

        With Delta Sync, only the changed parts of a Couchbase document are replicated. This can result in significant savings in bandwidth consumption as well as throughput improvements, especially when network bandwidth is typically constrained.

        Replications to a Server (for example, a Sync Gateway, or passive listener) automatically use delta sync if the property is enabled at database level by the server \u2014 see Admin REST API delta_sync.enabled or legacy JSON configuration databases.$db.delta_sync.enabled.

        Intra-Device replications automatically disable delta sync, whilst Peer-to-Peer replications automatically enable delta sync.

        "},{"location":"remote-sync-gateway/#initialize","title":"Initialize","text":"

        In this section Start Replicator | Checkpoint Starts

        "},{"location":"remote-sync-gateway/#start-replicator","title":"Start Replicator","text":"

        Use the Replicator class\u2019s Replicator(ReplicatorConfiguration) constructor to initialize the replicator with the configuration you have defined. You can optionally add a change listener (see Monitor) before starting the replicator with start().

        Example 11. Initialize and run replicator

        // Create replicator\n// Consider holding a reference somewhere\n// to prevent the Replicator from being GCed\nval repl = Replicator( \n\n    // initialize the replicator configuration\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"wss://listener.com:8954\"),\n\n        collections = mapOf(collections to null),\n\n        // Set replicator type\n        type = ReplicatorType.PUSH_AND_PULL,\n\n        // Configure Sync Mode\n        continuous = false, // default value\n\n        // set auto-purge behavior\n        // (here we override default)\n        enableAutoPurge = false,\n\n        // Configure Server Authentication --\n        // only accept self-signed certs\n        acceptOnlySelfSignedServerCertificate = true,\n\n        // Configure the credentials the\n        // client will provide if prompted\n        authenticator = BasicAuthenticator(\"PRIVUSER\", \"let me in\".toCharArray())\n    )\n)\n\n// Start replicator\nrepl.start(false)\n\nthis.replicator = repl\n
        1. Initialize the replicator with the configuration
        2. Start the replicator
        "},{"location":"remote-sync-gateway/#checkpoint-starts","title":"Checkpoint Starts","text":"

        Replicators use checkpoints to keep track of documents sent to the target database.

        Without checkpoints, Couchbase Lite would replicate the entire database content to the target database on each connection, even though previous replications may already have replicated some or all of that content.

        This functionality is generally not a concern to application developers. However, if you do want to force the replication to start again from zero, use the checkpoint reset argument when starting the replicator \u2014 as shown in Example 12.

        Example 12. Resetting checkpoints

        repl.start(true)\n

        Set start\u2019s reset option to true to purge the checkpoint and replicate from the start. The default, false, was shown in Example 11 for completeness only; it is unlikely you would explicitly use it in practice.

        "},{"location":"remote-sync-gateway/#monitor","title":"Monitor","text":"

        In this section Change Listeners | Replicator Status | Monitor Document Changes | Documents Pending Push

        You can monitor a replication\u2019s status by using a combination of Change Listeners and the replicator.status.activityLevel property \u2014 see activityLevel. This enables you to know, for example, when the replication is actively transferring data and when it has stopped.

        You can also choose to monitor document changes \u2014 see Monitor Document Changes.

        "},{"location":"remote-sync-gateway/#change-listeners","title":"Change Listeners","text":"

        Use a change listener to monitor changes and report on sync progress; this is an optional step. You can add a replicator change listener at any point; it will report changes from the point at which it is registered.

        Tip

        Don\u2019t forget to save the token so you can remove the listener later.

        Use the Replicator class to add a change listener as a callback with Replicator.addChangeListener() \u2014 see Example 13. You will then be asynchronously notified of state changes.

        You can remove a change listener with ListenerToken.remove().

        "},{"location":"remote-sync-gateway/#using-kotlin-flows","title":"Using Kotlin Flows","text":"

        Kotlin developers can take advantage of Flows to monitor replicators.

        fun replChangeFlowExample(repl: Replicator): Flow<ReplicatorActivityLevel> {\n    return repl.replicatorChangesFlow()\n        .map { it.status.activityLevel }\n}\n
        "},{"location":"remote-sync-gateway/#replicator-status","title":"Replicator Status","text":"

        You can use the ReplicatorStatus class to check the replicator status. That is, whether it is actively transferring data or if it has stopped \u2014 see Example 13.

        The returned ReplicatorStatus structure comprises:

        • activityLevel \u2014 STOPPED, OFFLINE, CONNECTING, IDLE, or BUSY \u2014 see states described in Table 5
        • progress
          • completed \u2014 the total number of changes completed
          • total \u2014 the total number of changes to be processed
        • error \u2014 the current error, if any

        Example 13. Monitor replication

        Adding a Change Listener | Using replicator.status
        val token = repl.addChangeListener { change ->\n    val err: CouchbaseLiteException? = change.status.error\n    if (err != null) {\n        println(\"Error code :: ${err.code}\\n$err\")\n    }\n}\n
        repl.status.let {\n    val progress = it.progress\n    println(\n        \"The Replicator is ${\n            it.activityLevel\n        } and has processed ${\n            progress.completed\n        } of ${progress.total} changes\"\n    )\n}\n
        "},{"location":"remote-sync-gateway/#replication-states","title":"Replication States","text":"

        Table 5 shows the different states, or activity levels, reported in the API; and the meaning of each.

        Table 5. Replicator activity levels

        • STOPPED: the replication is finished or has hit a fatal error.
        • OFFLINE: the replicator is offline because the remote host is unreachable.
        • CONNECTING: the replicator is connecting to the remote host.
        • IDLE: the replication has caught up with all the changes available from the server. The IDLE state is only used in continuous replications.
        • BUSY: the replication is actively transferring data.

        Note

        The replication change object also has properties to track progress (change.status.completed and change.status.total). Since the replication occurs in batches, the total count can vary through the course of a replication.

        "},{"location":"remote-sync-gateway/#replication-status-and-app-life-cycle","title":"Replication Status and App Life Cycle","text":""},{"location":"remote-sync-gateway/#ios","title":"iOS","text":"

        The following diagram describes the status changes when the application starts a replication, and when the application is being backgrounded or foregrounded by the OS. It applies to iOS only.

        Additionally, on iOS, an app already in the background may be terminated. In this case, the Database and Replicator instances will be null when the app returns to the foreground. Therefore, as a preventive measure, it is recommended to do a null check when the app enters the foreground, and to re-initialize the database and replicator if either is null.

        On other platforms, Couchbase Lite doesn\u2019t react to OS backgrounding or foregrounding events, and replications will continue running as long as the remote system does not terminate the connection and the app does not terminate. It is generally recommended to stop replications before going into the background; otherwise, socket connections may be closed by the OS, which may interfere with the replication process.

        "},{"location":"remote-sync-gateway/#other-platforms","title":"Other Platforms","text":"

        Couchbase Lite replications will continue running until the app terminates, unless the remote system, or the application, terminates the connection.

        Note

        Recall that the Android OS may kill an application without warning. You should explicitly stop replication processes when they are no longer useful (for example, when the app is in the background and the replication is IDLE) to avoid socket connections being closed by the OS, which may interfere with the replication process.

        "},{"location":"remote-sync-gateway/#monitor-document-changes","title":"Monitor Document Changes","text":"

        You can choose to register for document updates during a replication.

        For example, the code snippet in Example 14 registers a listener to monitor document replication performed by the replicator referenced by the variable repl. It prints the document ID of each document received and sent. Stop the listener as shown in Example 15.

        Example 14. Register a document listener

        val token = repl.addDocumentReplicationListener { replication ->\n    println(\"Replication type: ${if (replication.isPush) \"push\" else \"pull\"}\")\n\n    for (doc in replication.documents) {\n        println(\"Doc ID: ${doc.id}\")\n\n        doc.error?.let {\n            // There was an error\n            println(\"Error replicating document: $it\")\n            return@addDocumentReplicationListener\n        }\n\n        if (doc.flags.contains(DocumentFlag.DELETED)) {\n            println(\"Successfully replicated a deleted document\")\n        }\n    }\n}\n\nrepl.start()\nthis.replicator = repl\n

        Example 15. Stop document listener

        This code snippet shows how to stop the document listener using the token from the previous example.

        token.remove()\n
        "},{"location":"remote-sync-gateway/#document-access-removal-behavior","title":"Document Access Removal Behavior","text":"

        When access to a document is removed on Sync Gateway (see Sync Gateway\u2019s Sync Function), the document replication listener sends a notification with the ACCESS_REMOVED flag set to true and subsequently purges the document from the database.

        "},{"location":"remote-sync-gateway/#documents-pending-push","title":"Documents Pending Push","text":"

        Tip

        Replicator.isDocumentPending() is quicker and more efficient. Use it in preference to returning a list of pending document IDs, where possible.

        You can check whether documents are waiting to be pushed in any forthcoming sync by using either of the following API methods:

        • Use the Replicator.getPendingDocumentIds() method, which returns a list of document IDs that have local changes but have not yet been pushed to the server. This can be very useful in tracking the progress of a push sync, enabling the app to provide a visual indicator to the end user on its status, or to decide when it is safe to exit.
        • Use the Replicator.isDocumentPending() method to quickly check whether an individual document is pending a push.

        Example 16. Use Pending Document ID API

        val repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(setOf(collection) to null),\n        type = ReplicatorType.PUSH\n    )\n)\n\nval pendingDocs = repl.getPendingDocumentIds()\n\n// iterate and report on previously\n// retrieved pending docIds 'list'\nif (pendingDocs.isNotEmpty()) {\n    println(\"There are ${pendingDocs.size} documents pending\")\n\n    val firstDoc = pendingDocs.first()\n    repl.addChangeListener { change ->\n        println(\"Replicator activity level is ${change.status.activityLevel}\")\n        try {\n            if (!repl.isDocumentPending(firstDoc)) {\n                println(\"Doc ID $firstDoc has been pushed\")\n            }\n        } catch (err: CouchbaseLiteException) {\n            println(\"Failed getting pending docs\\n$err\")\n        }\n    }\n\n    repl.start()\n    this.replicator = repl\n}\n
        1. Replicator.getPendingDocumentIds() returns a list of the document IDs for all documents waiting to be pushed. This is a snapshot and may have changed by the time the response is received and processed.
        2. Replicator.isDocumentPending() returns true if the document is waiting to be pushed, and false otherwise.
        "},{"location":"remote-sync-gateway/#stop","title":"Stop","text":"

        Stopping a replication is straightforward: call stop(). This initiates an asynchronous operation and so is not necessarily immediate. Your app should account for this potential delay before attempting any subsequent operations.

        You can find further information on database operations in Databases.

        Example 17. Stop replicator

        // Stop replication.\nrepl.stop()\n

        Here we initiate the stopping of the replication using the stop() method. It will stop any active change listener once the replication is stopped.

        "},{"location":"remote-sync-gateway/#error-handling","title":"Error Handling","text":"

        When a replicator detects a network error it updates its status depending on the error type (permanent or temporary) and returns an appropriate HTTP error code.

        The following code snippet adds a change listener, which monitors a replication for errors and logs the returned error code.

        Example 18. Monitoring for network errors

        repl.addChangeListener { change ->\n    change.status.error?.let {\n        println(\"Error code: ${it.code}\")\n    }\n}\nrepl.start()\nthis.replicator = repl\n

        For permanent network errors (for example, 404 Not Found or 401 Unauthorized): the replicator stops permanently, whether setContinuous is true or false, and sets its status to STOPPED.

        For recoverable or temporary errors: Replicator sets its status to OFFLINE, then:

        • If setContinuous=true it retries the connection indefinitely
        • If setContinuous=false (one-shot) it retries the connection a limited number of times.

        The following error codes are considered temporary by the Couchbase Lite replicator and thus will trigger a connection retry:

        • 408: Request Timeout
        • 429: Too Many Requests
        • 500: Internal Server Error
        • 502: Bad Gateway
        • 503: Service Unavailable
        • 504: Gateway Timeout
        • 1001: DNS resolution error
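        The replicator performs this classification internally; as a rough illustration of the list above, a hypothetical helper (not part of the Kotbase API) might look like:

        ```kotlin
        // Hypothetical helper mirroring the transient-error list above;
        // Couchbase Lite performs this classification internally.
        val transientCodes = setOf(408, 429, 500, 502, 503, 504, 1001)

        fun isTransient(code: Int): Boolean = code in transientCodes

        fun main() {
            println(isTransient(503)) // true: Service Unavailable triggers a retry
            println(isTransient(401)) // false: Unauthorized is permanent, the replicator stops
        }
        ```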
        "},{"location":"remote-sync-gateway/#using-kotlin-flows_1","title":"Using Kotlin Flows","text":"

        Kotlin developers can also take advantage of Flows to monitor replicators.

        scope.launch {\n    repl.replicatorChangesFlow()\n        .mapNotNull { it.status.error }\n        .collect { error ->\n            println(\"Replication error :: $error\")\n        }\n}\n
        "},{"location":"remote-sync-gateway/#load-balancers","title":"Load Balancers","text":"

        Couchbase Lite uses WebSockets as the communication protocol to transmit data. Some load balancers are not configured for WebSocket connections by default (NGINX for example); so it might be necessary to explicitly enable them in the load balancer\u2019s configuration \u2014 see Load Balancers.

        By default, the WebSocket protocol uses compression to optimize for speed and bandwidth utilization. The level of compression is set on Sync Gateway and can be tuned in the configuration file (replicator_compression).

        "},{"location":"remote-sync-gateway/#certificate-pinning","title":"Certificate Pinning","text":"

        Couchbase Lite supports certificate pinning.

        Certificate pinning is a technique that can be used by applications to \"pin\" a host to its certificate. The certificate is typically delivered to the client by an out-of-band channel and bundled with the client. In this case, Couchbase Lite uses this embedded certificate to verify the trustworthiness of the server (for example, a Sync Gateway) and no longer needs to rely on a trusted third party for that (commonly referred to as the Certificate Authority).

        With the 3.0.2 release, changes were made to the way certificates on the host are matched:

        • Prior to CBL 3.0.2: the pinned certificate was compared only with the leaf certificate of the host. This is not always suitable, as leaf certificates are usually valid for shorter periods of time.
        • CBL 3.0.2+: the pinned certificate is compared against any certificate in the server\u2019s certificate chain.
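        The behavioral difference can be sketched as follows. This is an illustration only, not LiteCore internals; certificates are represented here as opaque strings.

        ```kotlin
        // Illustrative sketch of the matching change described above.
        // Pre-3.0.2: the pinned cert is compared to the leaf certificate only.
        fun pinnedMatchesPre302(pinned: String, chain: List<String>): Boolean =
            chain.firstOrNull() == pinned

        // 3.0.2+: the pinned cert may match any certificate in the server's chain.
        fun pinnedMatches302Plus(pinned: String, chain: List<String>): Boolean =
            pinned in chain

        fun main() {
            val chain = listOf("leaf", "intermediate", "root")
            println(pinnedMatchesPre302("intermediate", chain))  // false
            println(pinnedMatches302Plus("intermediate", chain)) // true
        }
        ```

        Pinning an intermediate or root certificate is useful precisely because it outlives the short-lived leaf certificate.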

        The following steps describe how to configure certificate pinning between Couchbase Lite and Sync Gateway:

        1. Create your own self-signed certificate with the openssl command. After completing this step, you should have 3 files: cert.pem, cert.cer, and privkey.pem.
        2. Configure Sync Gateway with the cert.pem and privkey.pem files. After completing this step, Sync Gateway is reachable over https/wss.
        3. On the Couchbase Lite side, the replication must point to a URL with the wss scheme and configured with the cert.cer file created in step 1.

        This example loads the certificate from the application sandbox, then converts it to the appropriate type to configure the replication object.

        Example 19. Cert Pinning

        val repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"wss://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to null),\n        pinnedServerCertificate = PlatformUtils.getAsset(\"cert.cer\")?.readByteArray()\n    )\n)\nrepl.start()\nthis.replicator = repl\n

        Note

        PlatformUtils.getAsset() needs to be implemented in a platform-specific way \u2014 see example in Kotbase tests.

        The replication should now run successfully over https/wss with certificate pinning.

        For more on pinning certificates see the blog entry: Certificate Pinning with Couchbase Mobile.

        "},{"location":"remote-sync-gateway/#troubleshooting","title":"Troubleshooting","text":""},{"location":"remote-sync-gateway/#logs","title":"Logs","text":"

        As always, when there is a problem with replication, logging is your friend. You can increase the log output for activity related to replication with Sync Gateway \u2014 see Example 20.

        Example 20. Set logging verbosity

        Database.log.console.setDomains(LogDomain.REPLICATOR)\nDatabase.log.console.level = LogLevel.DEBUG\n

        For more on troubleshooting with logs, see Using Logs.

        "},{"location":"remote-sync-gateway/#authentication-errors","title":"Authentication Errors","text":"

        If Sync Gateway is configured with a self-signed certificate but your app points to a ws scheme instead of wss, you will encounter an error with status code 11006 \u2014 see Example 21.

        Example 21. Protocol Mismatch

        CouchbaseLite Replicator ERROR: {Repl#2} Got LiteCore error: WebSocket error 1006 \"connection closed abnormally\"\n

        If Sync Gateway is configured with a self-signed certificate, and your app points to a wss scheme but the replicator configuration isn\u2019t using the certificate, you will encounter an error with status code 5011 \u2014 see Example 22.

        Example 22. Certificate Mismatch or Not Found

        CouchbaseLite Replicator ERROR: {Repl#2} Got LiteCore error: Network error 11 \"server TLS certificate is self-signed or has unknown root cert\"\n
        "},{"location":"roadmap/","title":"Roadmap","text":"
        • Documentation website (kotbase.dev)
        • NSInputStream interoperability (Okio #1123) (kotlinx-io #174)
        • Linux ARM64 support
        • Public release
        • Sample apps
          • Getting Started
          • Getting Started Compose Multiplatform
        • Couchbase Lite 3.1 API - Scopes and Collections
        • Versioned docs
        • Async coroutines API
        "},{"location":"scopes-and-collections/","title":"Scopes and Collections","text":"

        Scopes and collections allow you to organize your documents within a database.

        At a glance

        Use collections to organize your content in a database

        For example, if your database contains travel information, airport documents can be assigned to an airports collection, hotel documents can be assigned to a hotels collection, and so on.

        • Document names must be unique within their collection.

        Use scopes to group multiple collections

        Collections can be assigned to different scopes according to content-type or deployment-phase (for example, test versus production).

        • Collection names must be unique within their scope.
        "},{"location":"scopes-and-collections/#default-scopes-and-collections","title":"Default Scopes and Collections","text":"

        Every database you create contains a default scope and a default collection named _default.

        If you create a document in the database and don\u2019t specify a specific scope or collection, it is saved in the default collection, in the default scope.

        If you upgrade from a version of Couchbase Lite prior to 3.1, all existing data is automatically placed in the default scope and default collection.

        The default scope and collection cannot be dropped.

        "},{"location":"scopes-and-collections/#create-a-scope-and-collection","title":"Create a Scope and Collection","text":"

        In addition to the default scope and collection, you can create your own scope and collection when you create a document.

        Naming conventions for collections and scopes:

        • Must be between 1 and 251 characters in length.
        • Can only contain the characters A-Z, a-z, 0-9, and the symbols _, -, and %.
        • Cannot start with _ or %.
        • Scope names must be unique within a database.
        • Collection names must be unique within a scope.
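        The naming rules above could be expressed as a single pattern. The validator below is hypothetical and shown only for illustration; Couchbase Lite enforces these rules itself and throws on invalid names.

        ```kotlin
        // Hypothetical validator mirroring the naming rules above:
        // 1-251 chars; A-Z, a-z, 0-9, _, -, %; may not start with _ or %.
        val validName = Regex("^[A-Za-z0-9-][A-Za-z0-9_%-]{0,250}$")

        fun isValidName(name: String): Boolean = validName.matches(name)

        fun main() {
            println(isValidName("Verlaine")) // true
            println(isValidName("_hidden"))  // false: cannot start with _
            println(isValidName(""))         // false: must be at least 1 character
        }
        ```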

        Note

        Scope and collection names are case sensitive.

        Example 1. Create a scope and collection

        // create the collection \"Verlaine\" in the default scope (\"_default\")\nvar collection1: Collection? = db.createCollection(\"Verlaine\")\n// both of these retrieve collection1 created above\ncollection1 = db.getCollection(\"Verlaine\")\ncollection1 = db.defaultScope.getCollection(\"Verlaine\")\n\n// create the collection \"Verlaine\" in the scope \"Television\"\nvar collection2: Collection? = db.createCollection(\"Television\", \"Verlaine\")\n// both of these retrieve  collection2 created above\ncollection2 = db.getCollection(\"Television\", \"Verlaine\")\ncollection2 = db.getScope(\"Television\")!!.getCollection(\"Verlaine\")\n

        In the example above, you can see that db.createCollection() can take two parameters. The first is the scope assigned to the created collection; if this parameter is omitted, a collection of the given name is assigned to the _default scope, as when creating the collection Verlaine here.

        The second parameter is the name of the collection you want to create, in this case Verlaine. In the second section of the example you can see db.createCollection("Television", "Verlaine"). This creates the collection Verlaine within the scope Television, first creating the scope Television if it does not already exist.

        Note

        You cannot create an empty user-defined scope. A scope is implicitly created in the db.createCollection() method.

        "},{"location":"scopes-and-collections/#index-a-collection","title":"Index a Collection","text":"

        Example 2. Index a Collection

        // Create an index named \"nameIndex1\" on the property \"lastName\" in the collection using the IndexBuilder\ncollection.createIndex(\"nameIndex1\", IndexBuilder.valueIndex(ValueIndexItem.property(\"lastName\")))\n\n// Create a similar index named \"nameIndex2\" using an IndexConfiguration\ncollection.createIndex(\"nameIndex2\", ValueIndexConfiguration(\"lastName\"))\n\n// get the names of all the indices in the collection\nval indices = collection.indexes\n\n// delete all the collection indices\nindices.forEach { collection.deleteIndex(it) }\n
        "},{"location":"scopes-and-collections/#drop-a-collection","title":"Drop a Collection","text":"

        Example 3. Drop a Collection

        db.getCollection(collectionName, scopeName)?.let {\n    db.deleteCollection(it.name, it.scope.name)\n}\n

        Note

        There is no need to drop a user-defined scope. A user-defined scope is dropped automatically when the last collection associated with it is deleted.

        "},{"location":"scopes-and-collections/#list-scopes-and-collections","title":"List Scopes and Collections","text":"

        Example 4. List Scopes and Collections

        // List all of the collections in each of the scopes in the database\ndb.scopes.forEach { scope ->\n    println(\"Scope :: ${scope.name}\")\n    scope.collections.forEach {\n        println(\"    Collection :: ${it.name}\")\n    }\n}\n
        "},{"location":"using-logs/","title":"Using Logs","text":"

        Couchbase Lite \u2014 Using Logs for Troubleshooting

        Constraints

        The retrieval of logs from the device is out of scope of this feature.

        "},{"location":"using-logs/#introduction","title":"Introduction","text":"

        Couchbase Lite provides a robust Logging API \u2014 see API References for Log, FileLogger, and LogFileConfiguration \u2014 which makes debugging and troubleshooting easier during development and in production. It delivers flexibility in terms of how logs are generated and retained, whilst also maintaining the level of logging required by Couchbase Support for investigation of issues.

        Log output is split into the following streams:

        • Console based logging You can independently configure and control console logs, which provides a convenient method of accessing diagnostic information during debugging scenarios. With console logging, you can fine-tune diagnostic output to suit specific debug scenarios, without interfering with any logging required by Couchbase Support for the investigation of issues.
        • File based logging Here logs are written to separate log files, filtered by log level, with each log level supporting individual retention policies.
        • Custom logging For greater flexibility you can implement a custom logging class using the Logger interface.

        In all instances, you control what is logged and at what level using the Log class.

        "},{"location":"using-logs/#console-based-logging","title":"Console based logging","text":"

        Console based logging is often used to facilitate troubleshooting during development.

        Console logs are your go-to resource for diagnostic information. You can easily fine-tune their diagnostic content to meet the needs of a particular debugging scenario, perhaps by increasing the verbosity and-or choosing to focus on messages from a specific domain, to better focus on the problem area.

        Changes to console logging are independent of file logging, so you can make changes without compromising any file logging streams. Console logging is enabled by default. To change the default settings, use the Database.log property to set the required values \u2014 see Example 1.

        You will primarily use log.console and ConsoleLogger to control console logging.

        Example 1. Change Console Logging Settings

        This example enables and defines console-based logging settings.

        Database.log.console.domains = LogDomain.ALL_DOMAINS\nDatabase.log.console.level = LogLevel.VERBOSE\n
        1. Define the required domain; here we turn on logging for all available domains \u2014 see ConsoleLogger.domains and enum LogDomain.
        2. Here we turn on the most verbose log level \u2014 see ConsoleLogger.level and enum LogLevel. To disable logging for the specified LogDomains set the LogLevel to NONE.
        "},{"location":"using-logs/#file-based-logging","title":"File based logging","text":"

        File based logging is disabled by default \u2014 see Example 2 for how to enable it.

        You will primarily use Log.file and FileLogger to control file-based logging.

        "},{"location":"using-logs/#formats","title":"Formats","text":"

        Available file based logging formats:

        • Binary \u2014 most efficient for storage and performance. It is the default for file based logging. Use this format and a decoder, such as cbl-log, to view the logs \u2014 see Decoding binary logs.
        • Plaintext
        "},{"location":"using-logs/#configuration","title":"Configuration","text":"

        As with console logging you can set the log level \u2014 see the FileLogger class.

        With file based logging you can also use the LogFileConfiguration class\u2019s properties to specify the:

        • Path to the directory to store the log files
        • Log file format The default is binary. You can override that where necessary and output a plain text log.
        • Maximum number of rotated log files to keep
        • Maximum size of the log file (bytes). Once this limit is exceeded a new log file is started.

        Example 2. Enabling file logging

        Database.log.file.apply {\n    config = LogFileConfigurationFactory.newConfig(\n        directory = \"temp/cbl-logs\",\n        maxSize = 10240,\n        maxRotateCount = 5,\n        usePlainText = false\n    )\n    level = LogLevel.INFO\n}\n
        1. Set the log file directory
        2. Change the max rotation count from the default (1) to 5. Note that this means six files may exist at any one time: the five rotated log files, plus the active log file
        3. Set the maximum size (bytes) for our log file
        4. Select the binary log format (included for reference only as this is the default)
        5. Increase the log output level from the default (WARNING) to INFO \u2014 see FileLogger.level

        Tip

        \"temp/cbl-logs\" might be a platform-specific location. Use expect/actual or dependency injection to provide a platform-specific log file path.

        "},{"location":"using-logs/#custom-logging","title":"Custom logging","text":"

        Couchbase Lite allows for the registration of a callback function to receive Couchbase Lite log messages, which may be logged using any external logging framework.

        To do this, apps must implement the Logger interface \u2014 see Example 3 \u2014 and enable custom logging using Log.custom \u2014 see Example 4.

        Example 3. Implementing logger interface

        Here we introduce the code that implements the Logger interface.

        class LogTestLogger(override val level: LogLevel) : Logger {\n    override fun log(level: LogLevel, domain: LogDomain, message: String) {\n        // this method will never be called if param level < this.level\n        // handle the message, for example piping it to a third party framework\n    }\n}\n

        Example 4. Enabling custom logging

        This example shows how to enable the custom logger from Example 3.

        // this custom logger will not log an event with a log level < WARNING\nDatabase.log.custom = LogTestLogger(LogLevel.WARNING) \n

        Here we set the custom logger with a level of WARNING. The custom logger is called with every log and may choose to filter it, using its configured level.
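        The level filtering a custom logger typically applies can be sketched in plain Kotlin. The enum and class below are illustrative stand-ins for Kotbase's LogLevel and Logger, not the real types.

        ```kotlin
        // Stand-in for LogLevel, ordered from least to most severe.
        enum class Level { DEBUG, VERBOSE, INFO, WARNING, ERROR }

        // Minimal sketch of a level-filtering custom logger.
        class FilteringLogger(private val minLevel: Level) {
            val received = mutableListOf<String>()

            fun log(level: Level, message: String) {
                // Ignore anything below the configured level, as described above
                if (level.ordinal >= minLevel.ordinal) received += message
            }
        }

        fun main() {
            val logger = FilteringLogger(Level.WARNING)
            logger.log(Level.INFO, "not recorded")
            logger.log(Level.ERROR, "recorded")
            println(logger.received) // [recorded]
        }
        ```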

        "},{"location":"using-logs/#decoding-binary-logs","title":"Decoding binary logs","text":"

        You can use the cbl-log tool to decode binary log files \u2014 see Example 5.

        Example 5. Using the cbl-log tool

        macOS | CentOS | Windows

        Download the cbl-log tool using wget.

        console
        wget https://packages.couchbase.com/releases/couchbase-lite-log/3.1.1/couchbase-lite-log-3.1.1-macos.zip\n

        Navigate to the bin directory and run the cbl-log executable.

        console
        ./cbl-log logcat LOGFILE <OUTPUT_PATH>\n

        Download the cbl-log tool using wget.

        console
        wget https://packages.couchbase.com/releases/couchbase-lite-log/3.1.1/couchbase-lite-log-3.1.1-centos.zip\n

        Navigate to the bin directory and run the cbl-log executable.

        console
        ./cbl-log logcat LOGFILE <OUTPUT_PATH>\n

        Download the cbl-log tool using PowerShell.

        PowerShell
        Invoke-WebRequest https://packages.couchbase.com/releases/couchbase-lite-log/3.1.1/couchbase-lite-log-3.1.1-windows.zip -OutFile couchbase-lite-log-3.1.1-windows.zip\n

        Navigate to the bin directory and run the cbl-log executable.

        PowerShell
        .\\cbl-log.exe logcat LOGFILE <OUTPUT_PATH>\n
        "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Kotbase","text":"

        Kotlin Multiplatform library for Couchbase Lite

        "},{"location":"#introduction","title":"Introduction","text":"

        Kotbase pairs Kotlin Multiplatform with Couchbase Lite, an embedded NoSQL JSON document database. Couchbase Lite can be used as a standalone client database, or paired with Couchbase Server and Sync Gateway or Capella App Services for cloud to edge data synchronization. Features include:

        • SQL++, key/value, and full-text search queries
        • Observable queries, documents, databases, and replicators
        • Binary document attachments (blobs)
        • Peer-to-peer and cloud-to-edge data sync

        Kotbase provides full Enterprise and Community Edition API support for Android and JVM, native iOS and macOS, and experimental support for available APIs in native Linux and Windows.

        "},{"location":"active-peer/","title":"Active Peer","text":"

        How to set up a replicator to connect with a listener and replicate changes using peer-to-peer sync

        Android enablers

        Allow Unencrypted Network Traffic

        To use cleartext, un-encrypted, network traffic (http:// and-or ws://), include android:usesCleartextTraffic=\"true\" in the application element of the manifest as shown on developer.android.com. This is not recommended in production.

        Use Background Threads

        As with any network or file I/O activity, Couchbase Lite activities should not be performed on the UI thread. Always use a background thread.

        Code Snippets

        All code examples are indicative only. They demonstrate the basic concepts and approaches to using a feature. Use them as inspiration and adapt these examples to best practice when developing applications for your platform.

        "},{"location":"active-peer/#introduction","title":"Introduction","text":"

        This is an Enterprise Edition feature.

        This content provides sample code and configuration examples covering the implementation of Peer-to-Peer Sync over WebSockets. Specifically it covers the implementation of an Active Peer.

        This active peer (also referred to as a client and-or a replicator) will initiate the connection with a Passive Peer (also referred to as a server and-or listener) and participate in the replication of database changes to bring both databases into sync.

        Subsequent sections provide additional details and examples for the main configuration options.

        Secure Storage

        The use of TLS, its associated keys and certificates requires using secure storage to minimize the chances of a security breach. The implementation of this storage differs from platform to platform \u2014 see Using Secure Storage.

        "},{"location":"active-peer/#configuration-summary","title":"Configuration Summary","text":"

        You should configure and initialize a replicator for each Couchbase Lite database instance you want to sync. Example 1 shows the initialization and configuration process.

        Note

        As with any network or file I/O activity, Couchbase Lite activities should not be performed on the UI thread. Always use a background thread.

        Example 1. Replication configuration and initialization

        val repl = Replicator(\n    // initialize the replicator configuration\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"wss://listener.com:8954\"),\n\n        collections = mapOf(\n            collections to CollectionConfiguration(\n                conflictResolver = ReplicatorConfiguration.DEFAULT_CONFLICT_RESOLVER\n            )\n        ),\n\n        // Set replicator type\n        type = ReplicatorType.PUSH_AND_PULL,\n\n        // Configure Sync Mode\n        continuous = false, // default value\n\n        // Configure Server Authentication --\n        // only accept self-signed certs\n        acceptOnlySelfSignedServerCertificate = true,\n\n        // Configure the credentials the\n        // client will provide if prompted\n        authenticator = BasicAuthenticator(\"PRIVUSER\", \"let me in\".toCharArray())\n    )\n)\n\n// Optionally add a change listener\nval token = repl.addChangeListener { change ->\n    val err: CouchbaseLiteException? = change.status.error\n    if (err != null) {\n        println(\"Error code ::  ${err.code}\\n$err\")\n    }\n}\n\n// Start replicator\nrepl.start(false)\n\nthis.replicator = repl\nthis.token = token\n
        1. Get the listener\u2019s endpoint. Here we use a known URL, but it could be a URL established dynamically in a discovery phase.
        2. Identify the collections from the local database to be used.
        3. Configure how the replication should perform Conflict Resolution.
        4. Configure how the client will authenticate the server. Here we say connect only to servers presenting a self-signed certificate. By default, clients accept only servers presenting certificates that can be verified using the OS bundled Root CA Certificates \u2014 see Authenticating the Listener.
        5. Configure the credentials the client will present to the server. Here we say to provide Basic Authentication credentials. Other options are available \u2014 see Example 7.
        6. Initialize the replicator using your configuration object.
        7. Register an observer, which will notify you of changes to the replication status.
        8. Start the replicator.
        "},{"location":"active-peer/#device-discovery","title":"Device Discovery","text":"

        This phase is optional: if the listener is initialized at a well-known URL endpoint (for example, a static IP address or well-known DNS address) then you can simply configure Active Peers to connect to it.

        Prior to connecting with a listener you may execute a peer discovery phase to dynamically discover peers.

        For the Active Peer this involves browsing-for and selecting the appropriate service using a zero-config protocol such as Network Service Discovery on Android or Bonjour on iOS.

        "},{"location":"active-peer/#configure-replicator","title":"Configure Replicator","text":"

        In this section Configure Target | Sync Mode | Retry Configuration | Authenticating the Listener | Client Authentication

        "},{"location":"active-peer/#configure-target","title":"Configure Target","text":"

        Initialize and define the replication configuration with local and remote database locations using the ReplicatorConfiguration object.

        The constructor provides the server\u2019s URL (including the port number and the name of the remote database to sync with).

        It is expected that the app will identify the IP address and URL and append the remote database name to the URL endpoint, producing for example: wss://10.0.2.2:4984/travel-sample.

        The URL scheme for WebSocket URLs uses ws: (non-TLS) or wss: (SSL/TLS) prefixes.
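        A minimal helper makes the expected URL shape concrete. This function is hypothetical, shown only for illustration; in practice you would pass the resulting string to URLEndpoint.

        ```kotlin
        // Illustrative helper for building the endpoint URL described above:
        // scheme (ws or wss) + host + port + remote database name.
        fun syncUrl(host: String, port: Int, dbName: String, tls: Boolean = true): String {
            val scheme = if (tls) "wss" else "ws"
            return "$scheme://$host:$port/$dbName"
        }

        fun main() {
            println(syncUrl("10.0.2.2", 4984, "travel-sample")) // wss://10.0.2.2:4984/travel-sample
        }
        ```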

        Note

        On the Android platform, to use cleartext, un-encrypted, network traffic (http:// and-or ws://), include android:usesCleartextTraffic=\"true\" in the application element of the manifest as shown on developer.android.com. This is not recommended in production.

        Add the database collections to sync along with the CollectionConfiguration for each to the ReplicatorConfiguration. Multiple collections can use the same configuration, or each their own as needed. A null configuration will use the default configuration values, found in Defaults.Replicator.

        Example 2. Add Target to Configuration

        // initialize the replicator configuration\nval config = ReplicatorConfigurationFactory.newConfig(\n    target = URLEndpoint(\"wss://10.0.2.2:8954/travel-sample\"),\n    collections = mapOf(collections to null)\n)\n

        Note the use of the scheme prefix: wss:// to ensure TLS encryption (strongly recommended in production), or ws:// for cleartext.

        "},{"location":"active-peer/#sync-mode","title":"Sync Mode","text":"

        Here we define the direction and type of replication we want to initiate.

        We use ReplicatorConfiguration class\u2019s type and isContinuous properties to tell the replicator:

        • The type (or direction) of the replication: PUSH_AND_PULL; PULL; PUSH
        • The replication mode, that is either of:
          • Continuous \u2014 remaining active indefinitely to replicate changed documents (isContinuous=true).
          • Ad-hoc \u2014 a one-shot replication of changed documents (isContinuous=false).

        Example 3. Configure replicator type and mode

        // Set replicator type\ntype = ReplicatorType.PUSH_AND_PULL,\n\n// Configure Sync Mode\ncontinuous = false, // default value\n

        Tip

        Unless there is a solid use-case not to, always initiate a single PUSH_AND_PULL replication rather than identical separate PUSH and PULL replications.

        This prevents the replications from generating the same checkpoint docID, which would result in multiple conflicts.

        "},{"location":"active-peer/#retry-configuration","title":"Retry Configuration","text":"

        Couchbase Lite\u2019s replication retry logic assures a resilient connection.

        The replicator minimizes the chance and impact of dropped connections by maintaining a heartbeat; essentially pinging the listener at a configurable interval to ensure the connection remains alive.

        In the event it detects a transient error, the replicator will attempt to reconnect, stopping only when the connection is re-established, or the number of retries exceeds the retry limit (9 times for a single-shot replication and unlimited for a continuous replication).

        On each retry the interval between attempts is increased exponentially (exponential backoff) up to the maximum wait time limit (5 minutes).
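        The shape of that backoff can be sketched in a few lines. This is illustrative only: the replicator's actual backoff schedule is internal and not configurable; the base of 2 here is an assumption made purely to show exponential growth capped at the maximum wait time.

        ```kotlin
        import kotlin.math.min
        import kotlin.math.pow

        // Illustrative capped exponential backoff (not CBL's actual algorithm).
        fun backoffSeconds(attempt: Int, maxWaitSeconds: Double = 300.0): Double =
            min(2.0.pow(attempt), maxWaitSeconds)

        fun main() {
            println(backoffSeconds(1))  // 2.0
            println(backoffSeconds(4))  // 16.0
            println(backoffSeconds(12)) // 300.0 (capped at the 5-minute limit)
        }
        ```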

        Couchbase Lite provides control over this replication retry logic using a set of configurable properties \u2014 see Table 1.

        Table 1. Replication Retry Configuration Properties

        setHeartbeat()
        • Use cases: reduce to detect connection errors sooner; align to a load-balancer or proxy keep-alive interval \u2014 see Sync Gateway\u2019s topic Load Balancer - Keep Alive
        • Description: the interval (in seconds) between the heartbeat pulses. Default: the replicator pings the listener every 300 seconds.

        setMaxAttempts()
        • Use case: change this to limit or extend the number of retry attempts.
        • Description: the maximum number of retry attempts.
          • Set to zero (0) to use default values
          • Set to one (1) to prevent any retry attempt
          • The retry attempt count is reset when the replicator is able to connect and replicate
          • Default values are:
            • Single-shot replication = 9
            • Continuous replication = maximum integer value
          • Negative values generate a Couchbase exception, InvalidArgumentException

        setMaxAttemptWaitTime()
        • Use case: change this to adjust the interval between retries.
        • Description: the maximum interval between retry attempts. While you can configure the maximum permitted wait time, the replicator\u2019s exponential backoff algorithm calculates each individual interval, which is not configurable.
          • Default value: 300 seconds (5 minutes)
          • Zero sets the maximum interval between retries to the default of 300 seconds
          • 300 sets the maximum interval between retries to the default of 300 seconds
          • A negative value generates a Couchbase exception, InvalidArgumentException

        When necessary you can adjust any or all of those configurable values \u2014 see Example 4 for how to do this.

        Example 4. Configuring Replication Retries

        val repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to null),\n        //  other config params as required . .\n        heartbeat = 150, \n        maxAttempts = 20,\n        maxAttemptWaitTime = 600\n    )\n)\nrepl.start()\nthis.replicator = repl\n
        "},{"location":"active-peer/#authenticating-the-listener","title":"Authenticating the Listener","text":"

        Define the credentials your app (the client) is expecting to receive from the server (listener) in order to ensure that the server is one it is prepared to interact with.

        Note that the client cannot authenticate the server if TLS is turned off. When TLS is enabled (listener\u2019s default) the client must authenticate the server. If the server cannot provide acceptable credentials then the connection will fail.

        Use ReplicatorConfiguration properties setAcceptOnlySelfSignedServerCertificate and setPinnedServerCertificate, to tell the replicator how to verify server-supplied TLS server certificates.

        • If there is a pinned certificate, nothing else matters: the server cert must exactly match the pinned certificate.
        • If there are no pinned certs and setAcceptOnlySelfSignedServerCertificate is true then any self-signed certificate is accepted. Certificates that are not self-signed are rejected, no matter who signed them.
        • If there are no pinned certificates and setAcceptOnlySelfSignedServerCertificate is false (default), the client validates the server\u2019s certificates against the system CA certificates. The server must supply a chain of certificates whose root is signed by one of the certificates in the system CA bundle.
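The precedence described above can be modeled as a small decision function. This is an illustrative sketch only; the enum and function names are hypothetical and not part of the Kotbase API.

```kotlin
// Hypothetical model of the server-certificate verification precedence
// described above; names are illustrative, not Kotbase APIs.
enum class CertCheck { MATCH_PINNED, REQUIRE_SELF_SIGNED, VALIDATE_AGAINST_CA }

fun serverCertPolicy(
    hasPinnedCertificate: Boolean,
    acceptOnlySelfSignedServerCertificate: Boolean
): CertCheck = when {
    // A pinned certificate takes precedence over everything else
    hasPinnedCertificate -> CertCheck.MATCH_PINNED
    // Otherwise accept only self-signed certs, rejecting CA-signed ones
    acceptOnlySelfSignedServerCertificate -> CertCheck.REQUIRE_SELF_SIGNED
    // Default: validate the chain against the system CA bundle
    else -> CertCheck.VALIDATE_AGAINST_CA
}
```

A pinned certificate always wins; the acceptOnlySelfSignedServerCertificate flag is only consulted when no certificate is pinned.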

        Example 5. Set Server TLS security

        CA Cert | Self-Signed Cert | Pinned Certificate

        Set the client to expect and accept only CA attested certificates.

        // Configure Server Security\n// -- only accept CA attested certs\nacceptOnlySelfSignedServerCertificate = false,\n

        This is the default. Only certificate chains with roots signed by a trusted CA are allowed. Self-signed certificates are not allowed.

        Set the client to expect and accept only self-signed certificates.

        // Configure Server Authentication --\n// only accept self-signed certs\nacceptOnlySelfSignedServerCertificate = true,\n

        Set this to true to accept any self-signed cert. Any certificates that are not self-signed are rejected.

        Set the client to expect and accept only a pinned certificate.

        // Use the pinned certificate from the byte array (cert)\npinnedServerCertificate = TLSIdentity.getIdentity(\"Our Corporate Id\")\n    ?.certs?.firstOrNull()\n    ?: throw IllegalStateException(\"Cannot find corporate id\"),\n

        Configure the pinned certificate using data from the byte array cert

        "},{"location":"active-peer/#client-authentication","title":"Client Authentication","text":"

        Here we define the credentials that the client can present to the server, if prompted, so that the server can authenticate it.

        We use ReplicatorConfiguration's authenticator property to define the authentication method to the replicator.

        "},{"location":"active-peer/#basic-authentication","title":"Basic Authentication","text":"

        Use the BasicAuthenticator to supply basic authentication credentials (username and password).

        Example 6. Basic Authentication

        This example shows basic authentication using username and password:

        // Configure the credentials the\n// client will provide if prompted\nauthenticator = BasicAuthenticator(\"PRIVUSER\", \"let me in\".toCharArray())\n
        "},{"location":"active-peer/#certificate-authentication","title":"Certificate Authentication","text":"

        Use the ClientCertificateAuthenticator to configure the client TLS certificates to be presented to the server, on connection. This applies only to the URLEndpointListener.

        Note

        The server (listener) must have isTlsDisabled set to false and have a ListenerCertificateAuthenticator configured, or it will never ask for this client\u2019s certificate.

        The certificate to be presented to the server will need to be signed by the root certificates or be valid based on the authentication callback set to the listener via ListenerCertificateAuthenticator.

        Example 7. Client Cert Authentication

        This example shows client certificate authentication using an identity from secure storage.

        // Provide a client certificate to the server for authentication\nauthenticator = ClientCertificateAuthenticator(\n    TLSIdentity.getIdentity(\"clientId\")\n        ?: throw IllegalStateException(\"Cannot find client id\")\n)\n
        1. Get an identity from secure storage and create a TLSIdentity object
        2. Set the authenticator to ClientCertificateAuthenticator and configure it to use the retrieved identity
        "},{"location":"active-peer/#initialize-replicator","title":"Initialize Replicator","text":"

        Use the Replicator class\u2019s Replicator(ReplicatorConfiguration) constructor to initialize the replicator with the configuration you have defined. You can, optionally, add a change listener (see Monitor Sync) before starting the replicator with start().

        Example 8. Initialize and run replicator

        // Create replicator\n// Consider holding a reference somewhere\n// to prevent the Replicator from being GCed\nval repl = Replicator(\n    // initialize the replicator configuration\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"wss://listener.com:8954\"),\n\n        collections = mapOf(collections to null),\n\n        // Set replicator type\n        type = ReplicatorType.PUSH_AND_PULL,\n\n        // Configure Sync Mode\n        continuous = false, // default value\n\n        // set auto-purge behavior\n        // (here we override default)\n        enableAutoPurge = false,\n\n        // Configure Server Authentication --\n        // only accept self-signed certs\n        acceptOnlySelfSignedServerCertificate = true,\n\n        // Configure the credentials the\n        // client will provide if prompted\n        authenticator = BasicAuthenticator(\"PRIVUSER\", \"let me in\".toCharArray())\n    )\n)\n\n// Start replicator\nrepl.start(false)\n\nthis.replicator = repl\nthis.token = token\n
        1. Initialize the replicator with the configuration
        2. Start the replicator
        "},{"location":"active-peer/#monitor-sync","title":"Monitor Sync","text":"

        In this section Change Listeners | Replicator Status | Documents Pending Push

        You can monitor a replication\u2019s status by using a combination of Change Listeners and the replicator.status.activityLevel property \u2014 see activityLevel. This enables you to know, for example, when the replication is actively transferring data and when it has stopped.

        "},{"location":"active-peer/#change-listeners","title":"Change Listeners","text":"

        Use this to monitor changes and to inform on sync progress; this is an optional step. You can add a replicator change listener at any point; it will report changes from the point it is registered.

        Tip

        Don\u2019t forget to save the token so you can remove the listener later.

        Use the Replicator class to add a change listener as a callback with Replicator.addChangeListener() \u2014 see Example 9. You will then be asynchronously notified of state changes.

        You can remove a change listener with removeChangeListener(ListenerToken).

        "},{"location":"active-peer/#using-kotlin-flows","title":"Using Kotlin Flows","text":"

        Kotlin developers can take advantage of Flows to monitor replicators.

        fun replChangeFlowExample(repl: Replicator): Flow<ReplicatorActivityLevel> {\n    return repl.replicatorChangesFlow()\n        .map { it.status.activityLevel }\n}\n
        "},{"location":"active-peer/#replicator-status","title":"Replicator Status","text":"

        You can use the ReplicatorStatus class to check the replicator status. That is, whether it is actively transferring data or if it has stopped \u2014 see Example 9.

        The returned ReplicatorStatus structure comprises:

        • activityLevel \u2014 STOPPED, OFFLINE, CONNECTING, IDLE, or BUSY \u2014 see states described in Table 2
        • progress
          • completed \u2014 the total number of changes completed
          • total \u2014 the total number of changes to be processed
        • error \u2014 the current error, if any

        Example 9. Monitor replication

        Adding a Change Listener | Using replicator.status
        val token = repl.addChangeListener { change ->\n    val err: CouchbaseLiteException? = change.status.error\n    if (err != null) {\n        println(\"Error code :: ${err.code}\\n$err\")\n    }\n}\n
        repl.status.let {\n    val progress = it.progress\n    println(\n        \"The Replicator is ${\n            it.activityLevel\n        } and has processed ${\n            progress.completed\n        } of ${progress.total} changes\"\n    )\n}\n
        "},{"location":"active-peer/#replication-states","title":"Replication States","text":"

        Table 2 shows the different states, or activity levels, reported in the API; and the meaning of each.

        Table 2. Replicator activity levels

        • STOPPED \u2014 The replication is finished or hit a fatal error.
        • OFFLINE \u2014 The replicator is offline as the remote host is unreachable.
        • CONNECTING \u2014 The replicator is connecting to the remote host.
        • IDLE \u2014 The replication caught up with all the changes available from the server. The IDLE state is only used in continuous replications.
        • BUSY \u2014 The replication is actively transferring data.

        Note

        The replication change object also has properties to track the progress (change.status.completed and change.status.total). Since the replication occurs in batches, the total count can vary through the course of a replication.
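Because total can grow while a replication runs, a progress indicator should recompute its percentage from each status update rather than caching the total. A minimal plain-Kotlin sketch (the function is hypothetical, not a Kotbase API):

```kotlin
// Plain-Kotlin sketch: format a progress line from one status update.
// Guards against total == 0 (no changes discovered yet); remember that
// total may increase as replication batches are processed.
fun progressLine(completed: Long, total: Long): String =
    if (total == 0L) "Replication starting"
    else "Processed $completed of $total changes (${completed * 100 / total}%)"
```

Call this from each change-listener invocation so the displayed percentage stays consistent with the latest total.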

        "},{"location":"active-peer/#replication-status-and-app-life-cycle","title":"Replication Status and App Life Cycle","text":""},{"location":"active-peer/#ios","title":"iOS","text":"

        The following diagram describes the status changes when the application starts a replication, and when the application is being backgrounded or foregrounded by the OS. It applies to iOS only.

        Additionally, on iOS, an app already in the background may be terminated. In this case, the Database and Replicator instances will be null when the app returns to the foreground. Therefore, as a preventive measure, it is recommended to do a null check when the app enters the foreground, and to re-initialize the database and replicator if either is null.

        On other platforms, Couchbase Lite doesn\u2019t react to OS backgrounding or foregrounding events, and replications will continue running as long as neither the remote system nor the app terminates the connection. It is generally recommended to stop replications before going into the background; otherwise, socket connections may be closed by the OS, which may interfere with the replication process.

        "},{"location":"active-peer/#other-platforms","title":"Other Platforms","text":"

        Couchbase Lite replications will continue running until the app terminates, unless the remote system, or the application, terminates the connection.

        Note

        Recall that the Android OS may kill an application without warning. You should explicitly stop replication processes when they are no longer useful (for example, when the app is in the background and the replication is IDLE) to avoid socket connections being closed by the OS, which may interfere with the replication process.

        "},{"location":"active-peer/#documents-pending-push","title":"Documents Pending Push","text":"

        Tip

        Replicator.isDocumentPending() is quicker and more efficient. Use it in preference to returning a list of pending document IDs, where possible.

        You can check whether documents are waiting to be pushed in any forthcoming sync by using either of the following API methods:

        • Use the Replicator.getPendingDocumentIds() method, which returns a list of document IDs that have local changes, but which have not yet been pushed to the server. This can be very useful in tracking the progress of a push sync, enabling the app to provide a visual indicator to the end user on its status, or decide when it is safe to exit.
        • Use the Replicator.isDocumentPending() method to quickly check whether an individual document is pending a push.

        Example 10. Use Pending Document ID API

        val repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(setOf(collection) to null),\n        type = ReplicatorType.PUSH\n    )\n)\n\nval pendingDocs = repl.getPendingDocumentIds(collection)\n\n// iterate and report on previously\n// retrieved pending docIds 'list'\nif (pendingDocs.isNotEmpty()) {\n    println(\"There are ${pendingDocs.size} documents pending\")\n\n    val firstDoc = pendingDocs.first()\n    repl.addChangeListener { change ->\n        println(\"Replicator activity level is ${change.status.activityLevel}\")\n        try {\n            if (!repl.isDocumentPending(firstDoc)) {\n                println(\"Doc ID $firstDoc has been pushed\")\n            }\n        } catch (err: CouchbaseLiteException) {\n            println(\"Failed getting pending docs\\n$err\")\n        }\n    }\n\n    repl.start()\n    this.replicator = repl\n}\n
        1. Replicator.getPendingDocumentIds() returns a list of the document IDs for all documents waiting to be pushed. This is a snapshot and may have changed by the time the response is received and processed.
        2. Replicator.isDocumentPending() returns true if the document is waiting to be pushed, and false otherwise.
        "},{"location":"active-peer/#stop-sync","title":"Stop Sync","text":"

        Stopping a replication is straightforward. It is done using stop(). This initiates an asynchronous operation and so is not necessarily immediate. Your app should account for this potential delay before attempting any subsequent operations.

        You can find further information on database operations in Databases.

        Example 11. Stop replicator

        // Stop replication.\nrepl.stop()\n

        Here we initiate the stopping of the replication using the stop() method. It will stop any active change listener once the replication is stopped.

        "},{"location":"active-peer/#conflict-resolution","title":"Conflict Resolution","text":"

        Unless you specify otherwise, Couchbase Lite\u2019s default conflict resolution policy is applied \u2014 see Handling Data Conflicts.

        To use a different policy, specify a conflict resolver using conflictResolver as shown in Example 12.

        For more complex solutions you can provide a custom conflict resolver - see Handling Data Conflicts.

        Example 12. Using conflict resolvers

        Local Wins | Remote Wins | Merge
        val localWinsResolver: ConflictResolver = { conflict ->\n    conflict.localDocument\n}\nconfig.conflictResolver = localWinsResolver\n
        val remoteWinsResolver: ConflictResolver = { conflict ->\n    conflict.remoteDocument\n}\nconfig.conflictResolver = remoteWinsResolver\n
        val mergeConflictResolver: ConflictResolver = { conflict ->\n    val localDoc = conflict.localDocument?.toMap()?.toMutableMap()\n    val remoteDoc = conflict.remoteDocument?.toMap()?.toMutableMap()\n\n    val merge: MutableMap<String, Any?>?\n    if (localDoc == null) {\n        merge = remoteDoc\n    } else {\n        merge = localDoc\n        if (remoteDoc != null) {\n            merge.putAll(remoteDoc)\n        }\n    }\n\n    if (merge == null) {\n        MutableDocument(conflict.documentId)\n    } else {\n        MutableDocument(conflict.documentId, merge)\n    }\n}\nconfig.conflictResolver = mergeConflictResolver\n

        Just as a replicator may observe a conflict \u2014 when updating a document that has changed both in the local database and in a remote database \u2014 any attempt to save a document may also observe a conflict, if a replication has taken place since the local app retrieved the document from the database. To address that possibility, a version of the Database.save() method also takes a conflict resolver as shown in Example 13.

        The following code snippet shows an example of merging properties from the existing document (curDoc) into the one being saved (newDoc). In the event of conflicting keys, it will pick the key value from newDoc.

        Example 13. Merging document properties

        val mutableDocument = database.getDocument(\"xyz\")?.toMutable() ?: return\nmutableDocument.setString(\"name\", \"apples\")\ndatabase.save(mutableDocument) { newDoc, curDoc ->\n    if (curDoc == null) {\n        return@save false\n    }\n    val dataMap: MutableMap<String, Any?> = curDoc.toMap().toMutableMap()\n    dataMap.putAll(newDoc.toMap())\n    newDoc.setData(dataMap)\n    true\n}\n

        For more on replicator conflict resolution see Handling Data Conflicts.

        "},{"location":"active-peer/#delta-sync","title":"Delta Sync","text":"

        If delta sync is enabled on the listener, then replication will use delta sync.

        "},{"location":"blobs/","title":"Blobs","text":"

        Couchbase Lite database data model concepts \u2014 blobs

        "},{"location":"blobs/#introduction","title":"Introduction","text":"

        Couchbase Lite uses blobs to store the contents of images, other media files and similar format files as binary objects.

        The blob itself is not stored in the document. It is held in a separate content-addressable store indexed from the document and retrieved only on-demand.

        When a document is synchronized, the Couchbase Lite replicator adds an _attachments dictionary to the document\u2019s properties if it contains a blob \u2014 see Figure 1.

        "},{"location":"blobs/#blob-objects","title":"Blob Objects","text":"

        The blob as an object appears in a document as a dictionary property \u2014 see, for example, avatar in Figure 1.

        Other properties include length (the blob\u2019s size in bytes) and, optionally, content_type (typically, its MIME type).

        The blob\u2019s data (an image, audio or video content) is not stored in the document, but in a separate content-addressable store, indexed by the digest property \u2014 see Using Blobs.
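Content addressing means identical content produces the same digest key. As an illustration only, a SHA-1 digest can be computed on the JVM as follows; Couchbase Lite computes and encodes blob digests internally (and in its own format), so this sketch just shows the idea, rendering the hash as hex.

```kotlin
import java.security.MessageDigest

// Illustration only: Kotbase/Couchbase Lite computes blob digests
// internally. This shows the idea of content addressing via SHA-1,
// with the digest rendered as a hex string.
fun sha1Hex(content: ByteArray): String =
    MessageDigest.getInstance("SHA-1")
        .digest(content)
        .joinToString("") { "%02x".format(it) }
```

Storing the same bytes twice yields the same digest, so the binary content is kept once in the store however many documents reference it.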

        "},{"location":"blobs/#constraints","title":"Constraints","text":"
        • Couchbase Lite \u2014 Blobs can be arbitrarily large. They are only read on demand, not when you load a document.
        • Sync Gateway \u2014 The maximum content size is 20 MB per blob. If a document\u2019s blob is over 20 MB, the document will be replicated but not the blob.
        "},{"location":"blobs/#using-blobs","title":"Using Blobs","text":"

        The Blob API lets you access the blob\u2019s data content as in-memory data (a ByteArray) or as a Source input stream.

        The code in Example 1 shows how you might add a blob to a document and save it to the database. Here we use avatar as the property key and a jpeg file as the blob data.

        Example 1. Working with blobs

        // kotlinx-io multiplatform file system APIs are still in development\n// However, platform-specific implementations can be created in the meantime\nexpect fun getAsset(file: String): Source?\n\nval mDoc = MutableDocument()\n\ngetAsset(\"avatar.jpg\")?.use { source ->\n  mDoc.setBlob(\"avatar\", Blob(\"image/jpeg\", source))\n  collection.save(mDoc)\n}\n\nval doc = collection.getDocument(mDoc.id)\nval bytes = doc?.getBlob(\"avatar\")?.content\n
        1. Prepare a document to use for the example.
        2. Create the blob using the retrieved image and set image/jpeg as the blob MIME type.
        3. Add the blob to a document, using avatar as the property key.
        4. Saving the document generates a random access key for each blob, stored in the digest property as a SHA-1 hash \u2014 see Figure 1.
        5. Use the avatar key to retrieve the blob object later. Note, this is the key we assigned to the blob; the replicator auto-generates its own name for each attachment (for example, blob_1) \u2014 see Figure 1. The digest key will be the same as generated when we saved the blob document.
        "},{"location":"blobs/#syncing","title":"Syncing","text":"

        When a document containing a blob object is synchronized, the Couchbase Lite replicator generates an _attachments dictionary with an auto-generated name for each blob attachment. This is different to the avatar key and is used internally to access the blob content.

        If you view a sync\u2019ed blob document in Couchbase Server Admin Console, you will see something similar to Figure 1, which shows the document with its generated _attachments dictionary, including the digest.

        Figure 1. Sample Blob Document"},{"location":"changelog/","title":"Change Log","text":""},{"location":"changelog/#313-110","title":"3.1.3-1.1.0","text":"

        1 Feb 2024

        • Scopes and Collections \u2014 Couchbase Lite 3.1 API (#11)
          • Android SDK v3.1.3
          • Java SDK v3.1.3
          • Objective-C SDK v3.1.4
          • C SDK v3.1.3
        • Update to Kotlin 1.9.22 (8546e4b)
        • Handle empty log domain set (00db837)
        • Source-incompatible change: Convert @Throws getter functions to properties (#12)
          • Database.getIndexes() -> Database.indexes
          • Replicator.getPendingDocumentIds() -> Replicator.pendingDocumentIds
        • Make Expression, as, and from query builder functions infix (#14)
        "},{"location":"changelog/#ktx-extensions","title":"KTX extensions:","text":"
        • Add Expression math operator functions (148399d)
        • Add fetchContext to documentFlow, default to Dispatchers.IO (2abe61a)
        • Add mutableArrayOf, mutableDictOf, and mutableDocOf, collection and doc creation functions (#13)
        • selectDistinct, from, as, and groupBy convenience query builder functions (#14)
        "},{"location":"changelog/#3015-101","title":"3.0.15-1.0.1","text":"

        15 Dec 2023

        • Make Replicator AutoCloseable (#2)
        • Avoid memory leaks with memScoped toFLString() (#3)
        • Update Couchbase Lite to 3.0.15 (#4):
          • Android SDK v3.0.15
          • Java SDK v3.0.15
          • Objective-C SDK v3.0.15
          • C SDK v3.0.15
        • Update to Kotlin 1.9.21 (#5)
        • K2 compiler compatibility (#7)
        • Update kotlinx-serialization, kotlinx-datetime, and kotlinx-atomicfu (#8)
        • Use default hierarchy template source set names (#9)
        "},{"location":"changelog/#3012-100","title":"3.0.12-1.0.0","text":"

        1 Nov 2023

        Initial public release

        Using Couchbase Lite:

        • Android SDK v3.0.12
        • Java SDK v3.0.12
        • Objective-C SDK v3.0.12
        • C SDK v3.0.12
        "},{"location":"community/","title":"Community","text":"

        Join the #couchbase channel of the Kotlin Slack.

        Browse the Couchbase Community Hub.

        Chat in the Couchbase Discord.

        Post in the Couchbase Forums.

        "},{"location":"databases/","title":"Databases","text":"

        Working with Couchbase Lite databases

        "},{"location":"databases/#database-concepts","title":"Database Concepts","text":"

        Databases created on Couchbase Lite can share the same hierarchical structure as Couchbase Server or Capella databases. This makes it easier to sync data between mobile applications and applications built using Couchbase Server or Capella.

        Figure 1. Couchbase Lite Database Hierarchy

        Although the terminology is different, the structure can be mapped to relational database terms:

        Table 1. Relational Database \u2192 Couchbase

        • Database \u2192 Database
        • Schema \u2192 Scope
        • Table \u2192 Collection

        This structure gives you plenty of choices when it comes to partitioning your data. The most basic structure is to use the single default scope with a single default collection; or you could opt for a structure that allows you to split your collections into logical scopes.

        Figure 2. Couchbase Lite Examples

        Storing local configuration

        You may not need to sync all the data related to a particular application. You can set up a scope that syncs data, and a second scope that doesn\u2019t.

        One reason for doing this is to store local configuration data (such as the preferred screen orientation or keyboard layout). Since this information only relates to a particular device, there is no need to sync it:

        • local data scope \u2014 Contains information pertaining to the device.
        • syncing data scope \u2014 Contains information pertaining to the user, which can be synced back to the cloud for use on the web or another device.

        "},{"location":"databases/#create-or-open-database","title":"Create or Open Database","text":"

        You can create a new database and-or open an existing database, using the Database class. Just pass in a database name and optionally a DatabaseConfiguration \u2014 see Example 1.

        Things to watch for include:

        • If the named database does not exist in the specified, or default, location then a new one is created
        • The database is created in a default location unless you specify a directory for it \u2014 see DatabaseConfiguration and DatabaseConfiguration.setDirectory()

        Tip

        Best Practice is to always specify the path to the database explicitly.

        Typically, the default location is the application sandbox or current working directory.

        See also Finding a Database File.

        Example 1. Open or create a database

        val database = Database(\n    \"my-db\",\n    DatabaseConfigurationFactory.newConfig(\n        \"path/to/database\"\n    )\n)\n

        Tip

        \"path/to/database\" might be a platform-specific location. Use expect/actual or dependency injection to provide a platform-specific database path.

        "},{"location":"databases/#close-database","title":"Close Database","text":"

        You are advised to incorporate the closing of all open databases into your application workflow.

        Closing a database is simple, just use Database.close() \u2014 see Example 2. This also closes active replications, listeners and-or live queries connected to the database.

        Note

        Closing a database soon after starting a replication involving it can cause an exception as the asynchronous replicator (start) may not yet be connected.

        Example 2. Close a Database

        database.close()\n
        "},{"location":"databases/#database-encryption","title":"Database Encryption","text":"

        This is an Enterprise Edition feature.

        Kotbase includes the ability to encrypt Couchbase Lite databases. This allows mobile applications to secure the data at rest, when it is being stored on the device. The algorithm used to encrypt the database is 256-bit AES.

        "},{"location":"databases/#enabling","title":"Enabling","text":"

        To enable encryption, use DatabaseConfiguration.setEncryptionKey() to set the encryption key of your choice. Provide this encryption key every time the database is opened \u2014 see Example 3.

        Example 3. Configure Database Encryption

        val db = Database(\n    \"my-db\",\n    DatabaseConfigurationFactory.newConfig(\n        encryptionKey = EncryptionKey(\"PASSWORD\")\n    )\n)\n
        "},{"location":"databases/#persisting","title":"Persisting","text":"

        Couchbase Lite does not persist the key. It is the application\u2019s responsibility to manage the key and store it in a platform specific secure store such as Apple\u2019s Keychain or Android\u2019s Keystore.

        "},{"location":"databases/#opening","title":"Opening","text":"

        An encrypted database can only be opened with the same language SDK that was used to encrypt it in the first place. So a database encrypted with Kotbase on Android (which uses the Couchbase Lite Android SDK) and then exported, is readable only by Kotbase on Android or the Couchbase Lite Android SDK.

        "},{"location":"databases/#changing","title":"Changing","text":"

        To change an existing encryption key, open the database using its existing encryption-key and use Database.changeEncryptionKey() to set the required new encryption-key value.

        "},{"location":"databases/#removing","title":"Removing","text":"

        To remove encryption, open the database using its existing encryption-key and use Database.changeEncryptionKey() with a null value as the encryption key.

        "},{"location":"databases/#finding-a-database-file","title":"Finding a Database File","text":""},{"location":"databases/#android","title":"Android","text":"

        When the application is running on the Android emulator, you can locate the application\u2019s data folder and access the database file by using the adb CLI tools. For example, to list the different databases on the emulator, you can run the following commands.

        Example 4. List files

        $ adb shell\n$ su\n$ cd /data/data/{APPLICATION_ID}/files\n$ ls\n

        The adb pull command can be used to pull a specific database to your host machine.

        Example 5. Pull using adb command

        $ adb root\n$ adb pull /data/data/{APPLICATION_ID}/files/{DATABASE_NAME}.cblite2 .\n
        "},{"location":"databases/#ios","title":"iOS","text":"

        When the application is running on the iOS simulator, you can locate the application\u2019s sandbox directory using the OpenSim utility.

        "},{"location":"databases/#database-maintenance","title":"Database Maintenance","text":"

        From time to time it may be necessary to perform certain maintenance activities on your database, for example to compact the database file, removing unused documents and blobs no longer referenced by any documents.

        Couchbase Lite\u2019s API provides the Database.performMaintenance() method to accomplish this. The available maintenance operations, including compact, are shown in the enum MaintenanceType.

        This is a resource intensive operation and is not performed automatically. It should be run on-demand using the API. If in doubt, consult Couchbase support.

        "},{"location":"databases/#command-line-tool","title":"Command Line Tool","text":"

        cblite is a command-line tool for inspecting and querying Couchbase Lite databases.

        You can download and build it from the couchbaselabs GitHub repository.

        "},{"location":"databases/#troubleshooting","title":"Troubleshooting","text":"

        You should use console logs as your first source of diagnostic information. If the information in the default logging level is insufficient you can focus it on database errors and generate more verbose messages \u2014 see Example 6.

        For more on using Couchbase logs \u2014 see Using Logs.

        Example 6. Increase Level of Database Log Messages

        Database.log.console.domains = setOf(LogDomain.DATABASE) \n
        "},{"location":"differences/","title":"Differences from Java SDK","text":"

        Kotbase's API aligns with the Couchbase Lite Java and Android KTX SDKs. Migrating existing Kotlin code can be as straightforward as changing the import package from com.couchbase.lite to kotbase, with some exceptions:

        • Java callback functional interfaces are implemented as Kotlin function types.
        • File, URL, and URI APIs are represented as strings.
        • Date APIs use kotlinx-datetime's Instant.
        • InputStream APIs use kotlinx-io's Source.
        • Executor APIs use Kotlin's CoroutineContext.
        • Certificate APIs are available as raw ByteArrays or in platform-specific code.
        • There's no need to explicitly call CouchbaseLite.init(). Initialization functions can still be called with custom parameters in JVM and Android platform code.
        • Efforts have been made to detect and throw Kotlin exceptions for common error conditions, but NSError may still leak through on Apple platforms. Please report any occurrences that may deserve addressing.
        • Some deprecated APIs are omitted.
        • Fragment subscript APIs, which aren't available in the Java SDK, as Java doesn't support operator overloading, are provided in Kotbase, similar to Swift, Objective-C, and .NET.
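
        For example, a Java change-listener callback becomes a Kotlin lambda, and date values use kotlinx-datetime (a hedged sketch; `collection` and `doc` are assumed to be an open `Collection` and a `MutableDocument`):

        ```kotlin
        import kotbase.Collection
        import kotbase.MutableDocument
        import kotlinx.datetime.Clock

        fun migrationExample(collection: Collection, doc: MutableDocument) {
            // Java: collection.addChangeListener(new CollectionChangeListener() { ... })
            // Kotbase: callback interfaces are Kotlin function types
            val token = collection.addChangeListener { change ->
                println("Changed docs: ${change.documentIDs}")
            }

            // Java: doc.setDate("createdAt", new java.util.Date())
            // Kotbase: date APIs take a kotlinx-datetime Instant
            doc.setDate("createdAt", Clock.System.now())

            token.remove()
        }
        ```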
        "},{"location":"documents/","title":"Documents","text":"

        Couchbase Lite concepts \u2014 Data model \u2014 Documents

        "},{"location":"documents/#overview","title":"Overview","text":""},{"location":"documents/#document-structure","title":"Document Structure","text":"

        In Couchbase Lite the term 'document' refers to an entry in the database. You can compare it to a record, or a row in a table.

        Each document has an ID or unique identifier. This ID is similar to a primary key in other databases.

        You can specify the ID programmatically. If you omit it, it will be automatically generated as a UUID.

        Note

        Couchbase documents are assigned to a Collection. The ID of a document must be unique within the Collection it is written to. You cannot change it after you have written the document.

        The document also has a value which contains the actual application data. This value is stored as a dictionary of key-value (k-v) pairs. The values can be made up of several different Data Types such as numbers, strings, arrays, and nested objects.

        "},{"location":"documents/#data-encoding","title":"Data Encoding","text":"

        The document body is stored in an internal, efficient, binary form called Fleece. This internal form can be easily converted into a manageable native dictionary format for manipulation in applications.

        Fleece data is stored in the smallest format that will hold the value whilst maintaining the integrity of the value.

        "},{"location":"documents/#data-types","title":"Data Types","text":"

        The Document class offers a set of property accessors for various scalar types, such as:

        • Boolean
        • Date
        • Double
        • Float
        • Int
        • Long
        • String

        These accessors take care of converting to/from JSON encoding, and make sure you get the type you expect.

        In addition to these basic data types, Couchbase Lite provides for the following:

        • Dictionary represents a read-only key-value pair collection
        • MutableDictionary represents a writeable key-value pair collection
        • Array represents a read-only ordered collection of objects
        • MutableArray represents a writeable collection of objects
        • Blob represents an arbitrary piece of binary data
        "},{"location":"documents/#json","title":"JSON","text":"

        Couchbase Lite also provides for the direct handling of JSON data implemented in most cases by the provision of a toJSON() method on appropriate API classes (for example, on MutableDocument, Dictionary, Blob, and Array) \u2014 see Working with JSON Data.

        "},{"location":"documents/#constructing-a-document","title":"Constructing a Document","text":"

        An individual document often represents a single instance of an object in application code.

        You can consider a document as the equivalent of a 'row' in a relational table, with each of the document\u2019s attributes being equivalent to a 'column'.

        Documents can contain nested structures. This allows developers to express many-to-many relationships without requiring a reference or join table, and is naturally expressive of hierarchical data.

        Most apps will work with one or more documents, persisting them to a local database and optionally syncing them, either centrally or to the cloud.

        In this section we provide an example of how you might create a hotel document, which provides basic contact details and price data.

        Data Model
        hotel: {\n  type: string (value = `hotel`)\n  name: string\n  address: dictionary {\n    street: string\n    city: string\n    state: string\n    country: string\n    code: string\n  }\n  phones: array\n  rate: float\n}\n
        "},{"location":"documents/#open-a-database","title":"Open a Database","text":"

        First open your database. If the database does not already exist, Couchbase Lite will create it for you.

        Couchbase documents are assigned to a Collection. All the CRUD examples in this document operate on a collection object.

        // Get the database (and create it if it doesn\u2019t exist).\nval config = DatabaseConfiguration()\nconfig.directory = \"path/to/db\"\nval database = Database(\"getting-started\", config)\nval collection = database.getCollection(\"myCollection\")\n    ?: throw IllegalStateException(\"collection not found\")\n

        See Databases for more information.

        "},{"location":"documents/#create-a-document","title":"Create a Document","text":"

        Now create a new document to hold your application\u2019s data.

        Use the mutable form, so that you can add data to the document.

        // Create your new document\nval mutableDoc = MutableDocument()\n

        For more on using Documents, see Document Initializers and Mutability.

        "},{"location":"documents/#create-a-dictionary","title":"Create a Dictionary","text":"

        Now create a mutable dictionary (address).

        Each element of the dictionary value will be directly accessible via its own key.

        // Create and populate mutable dictionary\n// Create a new mutable dictionary and populate some keys/values\nval address = MutableDictionary()\naddress.setString(\"street\", \"1 Main st.\")\naddress.setString(\"city\", \"San Francisco\")\naddress.setString(\"state\", \"CA\")\naddress.setString(\"country\", \"USA\")\naddress.setString(\"code\", \"90210\")\n

        Tip

        The Kotbase KTX extensions provide an idiomatic MutableDictionary creation function:

        val address = mutableDictOf(\n    \"street\" to \"1 Main st.\",\n    \"city\" to \"San Francisco\",\n    \"state\" to \"CA\",\n    \"country\" to \"USA\",\n    \"code\" to \"90210\"\n)\n

        Learn more about Using Dictionaries.

        "},{"location":"documents/#create-an-array","title":"Create an Array","text":"

        Since the hotel may have multiple contact numbers, provide a field (phones) as a mutable array.

        // Create and populate mutable array\nval phones = MutableArray()\nphones.addString(\"650-000-0000\")\nphones.addString(\"650-000-0001\")\n

        Tip

        The Kotbase KTX extensions provide an idiomatic MutableArray creation function:

        val phones = mutableArrayOf(\n    \"650-000-0000\",\n    \"650-000-0001\"\n)\n

        Learn more about Using Arrays.

        "},{"location":"documents/#populate-a-document","title":"Populate a Document","text":"

        Now add your data to the mutable document created earlier. Each data item is stored as a key-value pair.

        // Initialize and populate the document\n\n// Add document type to document properties \nmutableDoc.setString(\"type\", \"hotel\")\n\n// Add hotel name string to document properties \nmutableDoc.setString(\"name\", \"Hotel Java Mo\")\n\n// Add float to document properties \nmutableDoc.setFloat(\"room_rate\", 121.75f)\n\n// Add dictionary to document's properties \nmutableDoc.setDictionary(\"address\", address)\n\n// Add array to document's properties \nmutableDoc.setArray(\"phones\", phones)\n

        Note

        Couchbase recommends using a type attribute to define each logical document type.

        "},{"location":"documents/#save-a-document","title":"Save a Document","text":"

        Now persist the populated document to your Couchbase Lite database. This will auto-generate the document ID.

        // Save the document changes \ncollection.save(mutableDoc)\n
        "},{"location":"documents/#close-the-database","title":"Close the Database","text":"

        With your document saved, you can now close your Couchbase Lite database.

        // Close the database \ndatabase.close()\n
        "},{"location":"documents/#working-with-data","title":"Working with Data","text":""},{"location":"documents/#checking-a-documents-properties","title":"Checking a Document\u2019s Properties","text":"

        To check whether a given property exists in the document, use the Document.contains(key: String) method.

        If you try to access a property which doesn\u2019t exist in the document, the call will return the default value for that getter method (0 for Document.getInt(), 0.0 for Document.getFloat(), etc.).
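
        When a stored zero must be distinguished from an absent key, check contains() first (a sketch; `doc` is assumed to be a `Document` retrieved from a collection):

        ```kotlin
        // getFloat() returns 0.0f for both a stored 0.0f and a missing key,
        // so consult contains() when the difference matters
        val rate: Float? = if (doc.contains("room_rate")) doc.getFloat("room_rate") else null
        ```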

        "},{"location":"documents/#date-accessors","title":"Date accessors","text":"

        Couchbase Lite offers Date accessors as a convenience. Dates are a common data type, but JSON doesn\u2019t natively support them, so the convention is to store them as strings in ISO-8601 format.

        Example 1. Date Getter

        This example sets the date on the createdAt property and reads it back using the Document.getDate() accessor method.

        doc.setValue(\"createdAt\", Clock.System.now())\nval date = doc.getDate(\"createdAt\")\n
        "},{"location":"documents/#using-dictionaries","title":"Using Dictionaries","text":"

        API References

        • Dictionary
        • MutableDictionary

        Example 2. Read Only

        // NOTE: No error handling, for brevity (see getting started)\nval document = collection.getDocument(\"doc1\")\n\n// Getting a dictionary from the document's properties\nval dict = document?.getDictionary(\"address\")\n\n// Access a value with a key from the dictionary\nval street = dict?.getString(\"street\")\n\n// Iterate dictionary\ndict?.forEach { key ->\n    println(\"Key $key = ${dict.getValue(key)}\")\n}\n\n// Create a mutable copy\nval mutableDict = dict?.toMutable()\n

        Example 3. Mutable

        // NOTE: No error handling, for brevity (see getting started)\n\n// Create a new mutable dictionary and populate some keys/values\nval mutableDict = MutableDictionary()\nmutableDict.setString(\"street\", \"1 Main st.\")\nmutableDict.setString(\"city\", \"San Francisco\")\n\n// Add the dictionary to a document's properties and save the document\nval mutableDoc = MutableDocument(\"doc1\")\nmutableDoc.setDictionary(\"address\", mutableDict)\ncollection.save(mutableDoc)\n
        "},{"location":"documents/#using-arrays","title":"Using Arrays","text":"

        API References

        • Array
        • MutableArray

        Example 4. Read Only

        // NOTE: No error handling, for brevity (see getting started)\n\nval document = collection.getDocument(\"doc1\")\n\n// Getting a phones array from the document's properties\nval array = document?.getArray(\"phones\")\n\n// Get element count\nval count = array?.count\n\n// Access an array element by index\nval phone = array?.getString(1)\n\n// Iterate array\narray?.forEachIndexed { index, item ->\n    println(\"Row $index = $item\")\n}\n\n// Create a mutable copy\nval mutableArray = array?.toMutable()\n

        Example 5. Mutable

        // NOTE: No error handling, for brevity (see getting started)\n\n// Create a new mutable array and populate data into the array\nval mutableArray = MutableArray()\nmutableArray.addString(\"650-000-0000\")\nmutableArray.addString(\"650-000-0001\")\n\n// Set the array to document's properties and save the document\nval mutableDoc = MutableDocument(\"doc1\")\nmutableDoc.setArray(\"phones\", mutableArray)\ncollection.save(mutableDoc)\n
        "},{"location":"documents/#using-blobs","title":"Using Blobs","text":"

        For more on working with blobs, see Blobs.

        "},{"location":"documents/#document-initializers","title":"Document Initializers","text":"

        You can use the following methods/initializers:

        • Use the MutableDocument() initializer to create a new document where the document ID is randomly generated by the database.
        • Use the MutableDocument(id: String?) initializer to create a new document with a specific ID.
        • Use the Collection.getDocument() method to get a document. If the document doesn\u2019t exist in the collection, the method will return null. You can use this behavior to check if a document with a given ID already exists in the collection.

        Example 6. Persist a document

        val doc = MutableDocument()\ndoc.apply {\n    setString(\"type\", \"task\")\n    setString(\"owner\", \"todo\")\n    setDate(\"createdAt\", Clock.System.now())\n}\ncollection.save(doc)\n

        Tip

        The Kotbase KTX extensions provide a document builder DSL:

        val doc = MutableDocument {\n    \"type\" to \"task\"\n    \"owner\" to \"todo\"\n    \"createdAt\" to Clock.System.now()\n}\ndatabase.save(doc)\n
        "},{"location":"documents/#mutability","title":"Mutability","text":"

        By default, a document is immutable when it is read from the database. Use Document.toMutable() to create an updatable instance of the document.

        Example 7. Make a mutable document

        Changes to the document are persisted to the database when the save method is called.

        collection.getDocument(\"xyz\")?.toMutable()?.let {\n    it.setString(\"name\", \"apples\")\n    collection.save(it)\n}\n

        Note

        Any user change to the value of reserved keys (_id, _rev, or _deleted) will be detected when a document is saved and will result in an exception (Error Code 5 \u2014 CorruptRevisionData) \u2014 see also Document Constraints.

        "},{"location":"documents/#batch-operations","title":"Batch operations","text":"

        If you\u2019re making multiple changes to a database at once, it\u2019s faster to group them together. The following example persists a few documents in batch.

        Example 8. Batch operations

        database.inBatch {\n    for (i in 0..9) {\n        val doc = MutableDocument()\n        doc.apply {\n            setValue(\"type\", \"user\")\n            setValue(\"name\", \"user $i\")\n            setBoolean(\"admin\", false)\n        }\n        collection.save(doc)\n        println(\"saved user document: ${doc.getString(\"name\")}\")\n    }\n}\n

        At the local level this operation is still transactional: no other Database instances, including ones managed by the replicator, can make changes during the execution of the block, and other instances will not see partial changes. But Couchbase Mobile is a distributed system, and due to the way replication works, there\u2019s no guarantee that Sync Gateway or other devices will receive your changes all at once.

        "},{"location":"documents/#document-change-events","title":"Document change events","text":"

        You can register for document changes. The following example registers for changes to the document with ID user.john and prints the verified_account property when a change is detected.

        Example 9. Document change events

        collection.addDocumentChangeListener(\"user.john\") { change ->\n    collection.getDocument(change.documentID)?.let {\n        println(\"Status: ${it.getString(\"verified_account\")}\")\n    }\n}\n
        "},{"location":"documents/#using-kotlin-flows","title":"Using Kotlin Flows","text":"

        Kotlin users can also take advantage of Flows to monitor for changes.

        The following methods show how to watch for document changes in a given collection or for changes to a specific document.

        Collection Changes:

        val collChanges: Flow<List<String>> = collection.collectionChangeFlow()\n    .map { it.documentIDs }\n

        Document Changes:

        val docChanges: Flow<DocumentChange> = collection.documentChangeFlow(\"1001\")\n    .mapNotNull { change ->\n        change.takeUnless {\n            collection.getDocument(it.documentID)?.getString(\"owner\").equals(owner)\n        }\n    }\n
        "},{"location":"documents/#document-expiration","title":"Document Expiration","text":"

        Document expiration allows users to set the expiration date for a document. When the document expires, it is purged from the database. The purge is not replicated to Sync Gateway.

        Example 10. Set document expiration

        This example sets the TTL for a document to 1 day from the current time.

        // Purge the document one day from now\ncollection.setDocumentExpiration(\n    \"doc123\",\n    Clock.System.now() + 1.days\n)\n\n// Reset expiration\ncollection.setDocumentExpiration(\"doc1\", null)\n\n// Query documents that will be expired in less than five minutes\nval query = QueryBuilder\n    .select(SelectResult.expression(Meta.id))\n    .from(DataSource.collection(collection))\n    .where(\n        Meta.expiration.lessThan(\n            Expression.longValue((Clock.System.now() + 5.minutes).toEpochMilliseconds())\n        )\n    )\n
        "},{"location":"documents/#document-constraints","title":"Document Constraints","text":"

        Couchbase Lite APIs do not explicitly disallow the use of attributes with the underscore prefix at the top level of a document. This is to facilitate the creation of documents for use either in local-only mode, where documents are not synced, or exclusively in peer-to-peer sync.

        Note

        \"_id\", \"_rev\", and \"_sequence\" are reserved keywords and must not be used as top-level attributes \u2014 see Example 11.

        Users are cautioned that any attempt to sync such documents to Sync Gateway will result in an error. To be future-proof, you are advised to avoid creating such documents. Use of these attributes for user-level data may result in undefined system behavior.

        For more guidance \u2014 see Sync Gateway - data modeling guidelines

        Example 11. Reserved Keys List

        • _attachments
        • _deleted 1
        • _id 1
        • _removed
        • _rev 1
        • _sequence
        "},{"location":"documents/#working-with-json-data","title":"Working with JSON Data","text":"

        In this section: Arrays | Blobs | Dictionaries | Documents | Query Results as JSON

        The toJSON() typed accessor makes it easy to work with JSON data alongside native and Couchbase Lite objects.

        "},{"location":"documents/#arrays","title":"Arrays","text":"

        Convert an Array to and from JSON using the toJSON() and toList() methods \u2014 see Example 12.

        Additionally, you can:

        • Initialize a MutableArray using data supplied as a JSON string. This is done using the MutableArray(json: String) constructor \u2014 see Example 12.
        • Set data with a JSON string using setJSON().

        Example 12. Arrays as JSON strings

        // JSON String -- an Array (3 elements. including embedded arrays)\nval jsonString = \"\"\"[{\"id\":\"1000\",\"type\":\"hotel\",\"name\":\"Hotel Ted\",\"city\":\"Paris\",\"country\":\"France\",\"description\":\"Undefined description for Hotel Ted\"},{\"id\":\"1001\",\"type\":\"hotel\",\"name\":\"Hotel Fred\",\"city\":\"London\",\"country\":\"England\",\"description\":\"Undefined description for Hotel Fred\"},{\"id\":\"1002\",\"type\":\"hotel\",\"name\":\"Hotel Ned\",\"city\":\"Balmain\",\"country\":\"Australia\",\"description\":\"Undefined description for Hotel Ned\",\"features\":[\"Cable TV\",\"Toaster\",\"Microwave\"]}]\"\"\"\n\n// initialize array from JSON string\nval mArray = MutableArray(jsonString)\n\n// Create and save new document using the array\nfor (i in 0 ..< mArray.count) {\n    mArray.getDictionary(i)?.apply {\n        println(getString(\"name\") ?: \"unknown\")\n        collection.save(MutableDocument(getString(\"id\"), toMap()))\n    }\n}\n\n// Get an array from the document as a JSON string\ncollection.getDocument(\"1002\")?.getArray(\"features\")?.apply {\n    // Print its elements\n    for (feature in toList()) {\n        println(\"$feature\")\n    }\n    println(toJSON())\n}\n
        "},{"location":"documents/#blobs","title":"Blobs","text":"

        Convert a Blob to JSON using the toJSON() method \u2014 see Example 13.

        You can use isBlob() to check whether a given dictionary object is a blob or not \u2014 see Example 13.

        Note that the blob object must first be saved to the database (generating the required metadata) before you can use the toJSON() method.

        Example 13. Blobs as JSON strings

        val thisBlob = collection.getDocument(\"thisdoc-id\")!!.toMap()\nif (!Blob.isBlob(thisBlob)) {\n    return\n}\nval blobType = thisBlob[\"content_type\"].toString()\nval blobLength = thisBlob[\"length\"] as Number?\n

        See also: Blobs

        "},{"location":"documents/#dictionaries","title":"Dictionaries","text":"

        Convert a Dictionary to and from JSON using the toJSON() and toMap() methods \u2014 see Example 14.

        Additionally, you can:

        • Initialize a MutableDictionary using data supplied as a JSON string. This is done using the MutableDictionary(json: String) constructor \u2014 see Example 14.
        • Set data with a JSON string using setJSON().

        Example 14. Dictionaries as JSON strings

        val jsonString = \"\"\"{\"id\":\"1002\",\"type\":\"hotel\",\"name\":\"Hotel Ned\",\"city\":\"Balmain\",\"country\":\"Australia\",\"description\":\"Undefined description for Hotel Ned\",\"features\":[\"Cable TV\",\"Toaster\",\"Microwave\"]}\"\"\"\n\nval mDict = MutableDictionary(jsonString)\nprintln(\"$mDict\")\nprintln(\"Details for: ${mDict.getString(\"name\")}\")\nmDict.forEach { key ->\n    println(key + \" => \" + mDict.getValue(key))\n}\n
        "},{"location":"documents/#documents","title":"Documents","text":"

        Convert a Document to and from JSON strings using the toJSON() and toMap() methods \u2014 see Example 15.

        Additionally, you can:

        • Initialize a MutableDocument using data supplied as a JSON string. This is done using the MutableDocument(id: String?, json: String) constructor \u2014 see Example 15.
        • Set data with a JSON string using setJSON().

        Example 15. Documents as JSON strings

        QueryBuilder\n    .select(SelectResult.expression(Meta.id).`as`(\"metaId\"))\n    .from(DataSource.collection(srcColl))\n    .execute()\n    .forEach {\n        it.getString(\"metaId\")?.let { thisId ->\n            srcColl.getDocument(thisId)?.toJSON()?.let { json ->\n                println(\"JSON String = $json\")\n                val hotelFromJSON = MutableDocument(thisId, json)\n                dstColl.save(hotelFromJSON)\n                dstColl.getDocument(thisId)?.toMap()?.forEach { e ->\n                    println(\"${e.key} => ${e.value}\")\n                }\n            }\n        }\n    }\n
        "},{"location":"documents/#query-results-as-json","title":"Query Results as JSON","text":"

        Convert a query Result to a JSON string using its toJSON() accessor method. The JSON string can easily be serialized or used as required in your application. See Example 16 for a working example using kotlinx-serialization.

        Example 16. Using JSON Results

        // Uses kotlinx-serialization JSON processor\n@Serializable\ndata class Hotel(val id: String, val type: String, val name: String)\n\nval hotels = mutableListOf<Hotel>()\n\nval query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"type\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n\nquery.execute().use { rs ->\n    rs.forEach {\n\n        // Get result as JSON string\n        val json = it.toJSON()\n\n        // Get JsonObject map from JSON string\n        val mapFromJsonString = Json.decodeFromString<JsonObject>(json)\n\n        // Use created JsonObject map\n        val hotelId = mapFromJsonString[\"id\"].toString()\n        val hotelType = mapFromJsonString[\"type\"].toString()\n        val hotelName = mapFromJsonString[\"name\"].toString()\n\n        // Get custom object from JSON string\n        val hotel = Json.decodeFromString<Hotel>(json)\n        hotels.add(hotel)\n    }\n}\n
        "},{"location":"documents/#json-string-format","title":"JSON String Format","text":"

        If your query selects ALL then the JSON format will be:

        {\n  database-name: {\n    key1: \"value1\",\n    keyx: \"valuex\"\n  }\n}\n

        If your query selects a sub-set of available properties then the JSON format will be:

        {\n  key1: \"value1\",\n  keyx: \"valuex\"\n}\n
        1. Any change to this reserved key will be detected when it is saved and will result in a Couchbase exception (Error Code 5 \u2014 CorruptRevisionData)

        "},{"location":"full-text-search/","title":"Full Text Search","text":"

        Couchbase Lite database data querying concepts \u2014 full text search

        "},{"location":"full-text-search/#overview","title":"Overview","text":"

        To run a full-text search (FTS) query, you must create a full-text index on the expression being matched. Unlike regular queries, the index is not optional.

        You can choose to use SQL++ or QueryBuilder syntaxes to create and use FTS indexes.

        The following examples use the data model introduced in Indexing. They create and use an FTS index built from the hotel\u2019s overview text.

        "},{"location":"full-text-search/#sql","title":"SQL++","text":""},{"location":"full-text-search/#create-index","title":"Create Index","text":"

        SQL++ provides a configuration object to define Full Text Search indexes \u2014 FullTextIndexConfiguration.

        Example 1. Using SQL++'s FullTextIndexConfiguration

        collection.createIndex(\n    \"overviewFTSIndex\",\n    FullTextIndexConfiguration(\"overview\")\n)\n
        "},{"location":"full-text-search/#use-index","title":"Use Index","text":"

        Full-text search is enabled using the SQL++ match() function.

        With the index created, you can construct and run a full-text search (FTS) query using the indexed properties.

        The index omits a set of common words, to prevent words like \"I\", \"the\", and \"an\" from overly influencing your queries. See the full list of these stop words.

        The following example finds all hotels mentioning Michigan in their overview text.

        Example 2. Using SQL++ Full Text Search

        val ftsQuery = database.createQuery(\n    \"SELECT _id, overview FROM _ WHERE MATCH(overviewFTSIndex, 'michigan') ORDER BY RANK(overviewFTSIndex)\"\n)\nftsQuery.execute().use { rs ->\n    rs.allResults().forEach {\n        println(\"${it.getString(\"id\")}: ${it.getString(\"overview\")}\")\n    }\n}\n
        "},{"location":"full-text-search/#querybuilder","title":"QueryBuilder","text":""},{"location":"full-text-search/#create-index_1","title":"Create Index","text":"

        The following example creates an FTS index on the overview property.

        Example 3. Using the IndexBuilder method

        collection.createIndex(\n    \"overviewFTSIndex\",\n    IndexBuilder.fullTextIndex(FullTextIndexItem.property(\"overview\"))\n)\n
        "},{"location":"full-text-search/#use-index_1","title":"Use Index","text":"

        With the index created, you can construct and run a full-text search (FTS) query using the indexed properties.

        The following example finds all hotels mentioning Michigan in their overview text.

        Example 4. Using QueryBuilder Full Text Search

        val ftsQuery = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"overview\")\n    )\n    .from(DataSource.collection(collection))\n    .where(FullTextFunction.match(\"overviewFTSIndex\", \"michigan\"))\n\nftsQuery.execute().use { rs ->\n    rs.allResults().forEach {\n        println(\"${it.getString(\"id\")}: ${it.getString(\"overview\")}\")\n    }\n}\n
        "},{"location":"full-text-search/#operation","title":"Operation","text":"

        In the examples above, the pattern to match is a single word: the full-text search query matches all documents that contain the word \"michigan\" in the value of the doc.overview property.

        Search is supported for all languages that use whitespace to separate words.

        Stemming, the process of matching different grammatical forms of the same word, like \"fast\" and \"faster\", is supported in the following languages: Danish, Dutch, English, Finnish, French, German, Hungarian, Italian, Norwegian, Portuguese, Romanian, Russian, Spanish, Swedish and Turkish.

        "},{"location":"full-text-search/#pattern-matching-formats","title":"Pattern Matching Formats","text":"

        As well as providing specific words or strings to match against, you can provide the pattern to match in these formats.

        "},{"location":"full-text-search/#prefix-queries","title":"Prefix Queries","text":"

        The query expression used to search for a term prefix is the prefix itself with a \"*\" character appended to it.

        Example 5. Prefix query

        Query for all documents containing a term with the prefix \"lin\".

        \"lin*\"\n

        This will match:

        • All documents that contain \"linux\"
        • And \u2026 those that contain terms \"linear\", \"linker\", \"linguistic\", and so on.
        "},{"location":"full-text-search/#overriding-the-property-name","title":"Overriding the Property Name","text":"

        Normally, a token or token prefix query is matched against the document property specified as the left-hand side of the match operator. This may be overridden by specifying a property name followed by a \":\" character before a basic term query. There may be space between the \":\" and the term to query for, but not between the property name and the \":\" character.

        Example 6. Override indexed property name

        Query the database for documents for which the term \"linux\" appears in the document title, and the term \"problems\" appears in either the title or body of the document.

        'title:linux problems'\n
        "},{"location":"full-text-search/#phrase-queries","title":"Phrase Queries","text":"

        A phrase query is one that retrieves all documents containing a nominated set of terms or term prefixes in a specified order with no intervening tokens.

        Phrase queries are specified by enclosing a space separated sequence of terms or term prefixes in double quotes (\").

        Example 7. Phrase query

        Query for all documents that contain the phrase \"linux applications\".

        \"linux applications\"\n
        "},{"location":"full-text-search/#near-queries","title":"NEAR Queries","text":"

        A NEAR query returns documents that contain two or more nominated terms or phrases within a specified proximity of each other (by default with 10 or fewer intervening terms). A NEAR query is specified by putting the keyword \"NEAR\" between two phrases, tokens, or token prefix queries. To specify a proximity other than the default, an operator of the form \"NEAR/<number>\" may be used, where <number> is the maximum number of intervening terms allowed.

        Example 8. Near query

        Search for a document that contains the phrase \"replication\" and the term \"database\" with not more than 2 terms separating the two.

        \"database NEAR/2 replication\"\n
        "},{"location":"full-text-search/#and-or-not-query-operators","title":"AND, OR & NOT Query Operators","text":"

        The enhanced query syntax supports the AND, OR and NOT binary set operators. Each of the two operands to an operator may be a basic FTS query, or the result of another AND, OR or NOT set operation. Operators must be entered using capital letters. Otherwise, they are interpreted as basic term queries instead of set operators.

        Example 9. Using And, Or and Not

        Return the set of documents that contain the term \"couchbase\", and the term \"database\".

        \"couchbase AND database\"\n
        "},{"location":"full-text-search/#operator-precedence","title":"Operator Precedence","text":"

        When using the enhanced query syntax, parentheses may be used to specify the precedence of the various operators.

        Example 10. Operator precedence

        Query for the set of documents that contains the term \"linux\", and at least one of the phrases \"couchbase database\" and \"sqlite library\".

        '(\"couchbase database\" OR \"sqlite library\") AND \"linux\"'\n
        "},{"location":"full-text-search/#ordering-results","title":"Ordering Results","text":"

        It\u2019s very common to sort full-text results in descending order of relevance. This can be a very difficult heuristic to define, but Couchbase Lite comes with a ranking function you can use.

        In the OrderBy array, use a string of the form Rank(X), where X is the property or expression being searched, to represent the ranking of the result.
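        As a sketch of what this looks like in a SQL++ query (assuming a hypothetical full-text index named overviewFTSIndex exists on the collection being queried), results can be ordered by descending relevance like so:

```sql
SELECT META().id, overview
FROM _
WHERE MATCH(overviewFTSIndex, "michigan")
ORDER BY RANK(overviewFTSIndex) DESC
```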

        "},{"location":"getting-started/","title":"Build and Run","text":"

        Build and run a starter app using Kotbase

        "},{"location":"getting-started/#introduction","title":"Introduction","text":"

        The Getting Started app is a very basic Kotlin Multiplatform app that demonstrates using Kotbase in a shared Kotlin module with native apps on each of the supported platforms.

        You can access the getting-started and getting-started-compose projects in the git repository under examples.

        Quick Steps

        1. Get the project and open it in Android Studio
        2. Build it
        3. Run any of the platform apps
        4. Enter some input and press \"Run database work\". The log output, in the app's UI or console panel, will show output similar to that in Figure 1.
        5. That\u2019s it.

        Figure 1: Example app output

        01-13 11:35:03.733 I/SHARED_KOTLIN: Database created: Database{@@0x9645222: 'desktopApp-db'}\n01-13 11:35:03.742 I/SHARED_KOTLIN: Collection created: desktopApp-db@@x7fba7630dcb0._default.example-coll\n01-13 11:35:03.764 I/DESKTOP_APP: Created document :: 83b6acb4-21ba-4834-aee4-2419dcea1114\n01-13 11:35:03.767 I/SHARED_KOTLIN: Retrieved document:\n01-13 11:35:03.767 I/SHARED_KOTLIN: Document ID :: 83b6acb4-21ba-4834-aee4-2419dcea1114\n01-13 11:35:03.767 I/SHARED_KOTLIN: Learning :: Kotlin\n01-13 11:35:03.768 I/DESKTOP_APP: Updated document :: 83b6acb4-21ba-4834-aee4-2419dcea1114\n01-13 11:35:03.785 I/SHARED_KOTLIN: Number of rows :: 1\n01-13 11:35:03.789 I/SHARED_KOTLIN: Document ID :: 83b6acb4-21ba-4834-aee4-2419dcea1114\n01-13 11:35:03.790 I/SHARED_KOTLIN: Document :: {\"language\":\"Kotlin\",\"version\":2.0,\"platform\":\"JVM 21.0.1\",\"input\":\"Hello, Kotbase!\"}\n
        "},{"location":"getting-started/#getting-started-app","title":"Getting Started App","text":"

        The Getting Started app shows examples of the essential Couchbase Lite CRUD operations, including:

        • Create a database
        • Create a collection
        • Create a document
        • Retrieve a document
        • Update a document
        • Query documents
        • Create and run a replicator

        While it is no exemplar of a real application, it will give you a good idea of how to get started using Kotbase and Kotlin Multiplatform.

        "},{"location":"getting-started/#shared-kotlin-native-ui","title":"Shared Kotlin + Native UI","text":"

        The getting-started version demonstrates using Kotbase in shared Kotlin code together with native app UIs.

        The Kotbase database examples are in the shared module, which is shared between each of the platform apps.

        "},{"location":"getting-started/#android-app","title":"Android App","text":"

        The Android app is in the androidApp module. It uses XML views for its UI.

        Run Android StudioCommand Line

        Run the androidApp run configuration.

        Install

        ./gradlew :androidApp:installDebug\n
        Run
        adb shell am start -n dev.kotbase.gettingstarted/.MainActivity\n

        "},{"location":"getting-started/#ios-app","title":"iOS App","text":"

        The iOS app is in the iosApp directory. It is an Xcode project and uses SwiftUI for its UI.

        Run Android StudioXcode

        With the Kotlin Multiplatform Mobile plugin, run the iosApp run configuration.

        Open iosApp/iosApp.xcodeproj and run the iosApp scheme.

        "},{"location":"getting-started/#jvm-desktop-app","title":"JVM Desktop App","text":"

        The JVM desktop app is in the desktopApp module. It uses Compose UI for its UI.

        Run Android StudioCommand Line

        Run the desktopApp run configuration.

        ./gradlew :desktopApp:run\n
        "},{"location":"getting-started/#native-cli-app","title":"Native CLI App","text":"

        The native app is in the cliApp module. It uses a command-line interface (CLI) on macOS, Linux, and Windows.

        The app takes two command-line arguments, first the \"input\" value, written to the document on update, and second true or false for whether to run the replicator. These arguments can also be passed as gradle properties.

        Run Android StudioCommand Line

        Run the cliApp run configuration.

        ./gradlew :cliApp:runDebugExecutableNative -PinputValue=\"\" -Preplicate=false\n
        or Build
        ./gradlew :cliApp:linkDebugExecutableNative\n
        Run
        cliApp/build/bin/native/debugExecutable/cliApp.kexe \"<input value>\" <true|false>\n

        "},{"location":"getting-started/#share-everything-in-kotlin","title":"Share Everything in Kotlin","text":"

        The getting-started-compose version demonstrates sharing the entirety of the application code in Kotlin, including the UI with Compose Multiplatform.

        The entire compose app is a single Kotlin Multiplatform module, encompassing all platforms, with an additional Xcode project for the iOS app.

        "},{"location":"getting-started/#android-app_1","title":"Android App","text":"Run Android StudioCommand Line

        Run the androidApp run configuration.

        Install

        ./gradlew :composeApp:installDebug\n
        Start
        adb shell am start -n dev.kotbase.gettingstarted.compose/.MainActivity\n

        "},{"location":"getting-started/#ios-app_1","title":"iOS App","text":"Run Android StudioXcode

        With the Kotlin Multiplatform Mobile plugin, run the iosApp run configuration.

        Open iosApp/iosApp.xcworkspace and run the iosApp scheme.

        Important

        Be sure to open iosApp.xcworkspace and not iosApp.xcodeproj. The getting-started-compose iosApp uses CocoaPods and the CocoaPods Gradle plugin to add the shared library dependency. The .xcworkspace includes the CocoaPods dependencies.

        Note

        Compose Multiplatform no longer requires CocoaPods for copying resources since version 1.5.0. However, the getting-started-compose example still uses CocoaPods for linking the Couchbase Lite framework. See the getting-started version for an example of how to link the Couchbase Lite framework without using CocoaPods.

        "},{"location":"getting-started/#jvm-desktop-app_1","title":"JVM Desktop App","text":"Run Android StudioCommand Line

        Run the desktopApp run configuration.

        ./gradlew :composeApp:run\n
        "},{"location":"getting-started/#sync-gateway-replication","title":"Sync Gateway Replication","text":"

        Using the apps with Sync Gateway and Couchbase Server requires working installations of both. See also \u2014 Install Sync Gateway

        Once you have Sync Gateway configured, update the ReplicatorConfiguration in the app with the server's URL endpoint and authentication credentials.

        "},{"location":"getting-started/#kotlin-multiplatform-tips","title":"Kotlin Multiplatform Tips","text":""},{"location":"getting-started/#calling-platform-specific-apis","title":"Calling Platform-specific APIs","text":"

        The apps utilize the Kotlin Multiplatform expect/actual feature to populate the created document with the platform the app is running on.

        See common expect fun getPlatform() and actual fun getPlatform() for Android, iOS, JVM, Linux, macOS, and Windows.
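        As an illustration only (not the example apps' actual code), the JVM-side actual implementation of such a function might look something like the following plain-Kotlin sketch; the format of the returned string is hypothetical:

```kotlin
// Hypothetical sketch of a JVM-side implementation. In the real app this would
// be declared `actual fun getPlatform()`, matching a common-source-set
// `expect fun getPlatform(): String`; here it is a plain function so it
// compiles standalone.
fun getPlatform(): String =
    "JVM ${System.getProperty("java.version")} on ${System.getProperty("os.name")}"
```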

        "},{"location":"getting-started/#using-coroutines-in-swift","title":"Using Coroutines in Swift","text":"

        The getting-started app uses KMP-NativeCoroutines to consume Kotlin Flows in Swift. See @NativeCoroutines annotation in Kotlin and asyncSequence(for:) in Swift code.

        "},{"location":"getting-started/#kotbase-library-source","title":"Kotbase Library Source","text":"

        The apps can get the Kotbase library dependency either from its published Maven artifact or by building the library locally from the source repository. Set the useLocalLib property in gradle.properties to true to build the library from source; otherwise the published artifact from Maven Central will be used.

        "},{"location":"handling-data-conflicts/","title":"Handling Data Conflicts","text":"

        Couchbase Lite Database Sync \u2014 handling conflict between data changes

        "},{"location":"handling-data-conflicts/#causes-of-conflicts","title":"Causes of Conflicts","text":"

        Document conflicts can occur if multiple changes are made to the same version of a document by multiple peers in a distributed system. For Couchbase Mobile, a peer can be a Couchbase Lite or Sync Gateway database instance.

        Such conflicts can occur after either of the following events:

        • A replication saves a document change \u2014 in which case the change with the most revisions wins (unless one change is a delete). See Conflicts when Replicating
        • An application saves a document change directly to a database instance \u2014 in which case, last write wins, unless one change is a delete \u2014 see Conflicts when Updating

        Note

        Deletes always win. So, in either of the above cases, if one of the changes was a delete then that change wins.

        The following sections discuss each scenario in more detail.

        Dive deeper \u2026

        Read more about Document Conflicts and Automatic Conflict Resolution in Couchbase Mobile.

        "},{"location":"handling-data-conflicts/#conflicts-when-replicating","title":"Conflicts when Replicating","text":"

        There\u2019s no practical way to prevent a conflict when incompatible changes to a document are made in multiple instances of an app. The conflict is realized only when replication propagates the incompatible changes between the peers.

        Example 1. A typical replication conflict scenario

        1. Molly uses her device to create DocumentA.
        2. Replication syncs DocumentA to Naomi\u2019s device.
        3. Molly uses her device to apply ChangeX to DocumentA.
        4. Naomi uses her device to make a different change, ChangeY, to DocumentA.
        5. Replication syncs ChangeY to Molly\u2019s device. This device already has ChangeX putting the local document in conflict.
        6. Replication syncs ChangeX to Naomi\u2019s device. This device already has ChangeY and now Naomi\u2019s local document is in conflict.
        "},{"location":"handling-data-conflicts/#automatic-conflict-resolution","title":"Automatic Conflict Resolution","text":"

        Note

        The rules only apply to conflicts caused by replication. Conflict resolution takes place exclusively during pull replication, while push replication remains unaffected.

        Couchbase Lite uses the following rules to handle conflicts such as those described in A typical replication conflict scenario:

        • If one of the changes is a deletion: A deleted document (that is, a tombstone) always wins over a document update.
        • If both changes are document changes: The change with the most revisions will win. Since each change creates a revision with an ID prefixed by an incremented version number, the winner is the change with the highest version number.

        The result is saved internally by the Couchbase Lite replicator. Those rules describe the internal behavior of the replicator. For additional control over the handling of conflicts, including when a replication is in progress, see Custom Conflict Resolution.
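        To make the two rules above concrete, here is a plain-Kotlin reduction of them, an illustration only, not the replicator's actual code. It models a revision as a generation-prefixed ID (e.g. \"4-cafebabe\") plus a deletion flag, which mirrors how revision IDs are described above:

```kotlin
// Illustration of the default resolution rules described above.
// A revision ID is prefixed by an incremented version number (generation).
data class Revision(val id: String, val isDeletion: Boolean) {
    // The number before the '-' is the generation (revision count).
    val generation: Int get() = id.substringBefore('-').toInt()
}

fun resolveDefault(local: Revision, remote: Revision): Revision = when {
    // Rule 1: a deletion (tombstone) always wins over a document update.
    local.isDeletion != remote.isDeletion ->
        if (local.isDeletion) local else remote
    // Rule 2: otherwise the change with the most revisions wins.
    else ->
        if (local.generation >= remote.generation) local else remote
}
```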

        "},{"location":"handling-data-conflicts/#custom-conflict-resolution","title":"Custom Conflict Resolution","text":"

        Starting in Couchbase Lite 2.6, application developers who want more control over how document conflicts are handled can use custom logic to select the winner between conflicting revisions of a document.

        If a custom conflict resolver is not provided, the system will automatically resolve conflicts as discussed in Automatic Conflict Resolution, and as a consequence there will be no conflicting revisions in the database.

        Caution

        While this is true of any user-defined function, app developers are strongly cautioned against writing sub-optimal custom conflict handlers: time-consuming handlers can slow down the client\u2019s save operations.

        To implement custom conflict resolution during replication, you must implement the following steps:

        1. Conflict Resolver
        2. Configure the Replicator
        "},{"location":"handling-data-conflicts/#conflict-resolver","title":"Conflict Resolver","text":"

        Apps have the following strategies for resolving conflicts:

        • Local Wins: The current revision in the database wins.
        • Remote Wins: The revision pulled from the remote endpoint through replication wins.
        • Merge: Merge the content bodies of the conflicting revisions.

        Example 2. Using conflict resolvers

        Local WinsRemote WinsMerge
        val localWinsResolver: ConflictResolver = { conflict ->\n    conflict.localDocument\n}\nconfig.conflictResolver = localWinsResolver\n
        val remoteWinsResolver: ConflictResolver = { conflict ->\n    conflict.remoteDocument\n}\nconfig.conflictResolver = remoteWinsResolver\n
        val mergeConflictResolver: ConflictResolver = { conflict ->\n    val localDoc = conflict.localDocument?.toMap()?.toMutableMap()\n    val remoteDoc = conflict.remoteDocument?.toMap()?.toMutableMap()\n\n    val merge: MutableMap<String, Any?>?\n    if (localDoc == null) {\n        merge = remoteDoc\n    } else {\n        merge = localDoc\n        if (remoteDoc != null) {\n            merge.putAll(remoteDoc)\n        }\n    }\n\n    if (merge == null) {\n        MutableDocument(conflict.documentId)\n    } else {\n        MutableDocument(conflict.documentId, merge)\n    }\n}\nconfig.conflictResolver = mergeConflictResolver\n

        When a null document is returned by the resolver, the conflict will be resolved as a document deletion.
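        Note the precedence the merge resolver example implies: because the remote body is copied on top of a mutable copy of the local body, remote values win for conflicting keys. The following plain-map reduction (illustration only, no Couchbase Lite types) shows the same logic:

```kotlin
// Plain-Kotlin reduction of the merge step in the example resolver above.
// Kotlin's Map `+` operator behaves like putAll: for keys present in both
// maps, the right-hand (remote) value wins.
fun mergeBodies(
    local: Map<String, Any?>?,
    remote: Map<String, Any?>?
): Map<String, Any?>? = when {
    local == null -> remote
    remote == null -> local
    else -> local + remote  // remote wins on key collisions
}
```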

        "},{"location":"handling-data-conflicts/#important-guidelines-and-best-practices","title":"Important Guidelines and Best Practices","text":"

        Points of Note:

        • If you have multiple replicators, use a unified conflict resolver across all of them, rather than distinct resolvers. Failure to do so could lead to data loss under exception cases, or if the app is terminated (by the user or an app crash) while there are pending conflicts.
        • If the document ID of the document returned by the resolver does not correspond to the document that is in conflict then the replicator will log a warning message.

        Important

        Developers are encouraged to review the warnings and fix the resolver to return a valid document ID.

        • If a document from a different database is returned, the replicator will treat it as an error. A document replication event will be posted with an error and an error message will be logged.

        Important

        Apps are encouraged to observe such errors and take appropriate measures to fix the resolver function.

        • When the replicator is stopped, the system will attempt to resolve outstanding and pending conflicts before stopping. Hence, apps should expect to see some delay when attempting to stop the replicator depending on the number of outstanding documents in the replication queue and the complexity of the resolver function.
        • If there is an exception thrown in the ConflictResolver function, the exception will be caught and handled:
          • The conflict to resolve will be skipped. The pending conflicted documents will be resolved when the replicator is restarted.
          • The exception will be reported in the warning logs.
          • The exception will be reported in the document replication event.

        Important

        While the system will handle exceptions in the manner specified above, it is strongly encouraged for the resolver function to catch exceptions and handle them in a way appropriate to their needs.
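        One way to follow that advice is to wrap the resolver body so that any exception is caught locally and resolved deterministically, rather than leaving the conflicted document to be retried on restart. The sketch below is generic plain Kotlin (the `Doc` type parameter and function names are illustrative, not Kotbase API):

```kotlin
// Hedged sketch: wrap a resolver body so exceptions fall back to a
// deterministic strategy (e.g. local wins) instead of propagating.
fun <Doc> safeResolver(
    fallback: (local: Doc?, remote: Doc?) -> Doc?,
    body: (local: Doc?, remote: Doc?) -> Doc?
): (Doc?, Doc?) -> Doc? = { local, remote ->
    try {
        body(local, remote)
    } catch (e: Exception) {
        // In a real app: log the exception here, then resolve
        // deterministically rather than skipping the document.
        fallback(local, remote)
    }
}
```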

        "},{"location":"handling-data-conflicts/#configure-the-replicator","title":"Configure the Replicator","text":"

        The implemented custom conflict resolver can be registered on the ReplicatorConfiguration object. The default value of the conflictResolver is null. When the value is null, the default conflict resolution will be applied.

        Example 3. A Conflict Resolver

        val collectionConfig = CollectionConfigurationFactory.newConfig(conflictResolver = localWinsResolver)\nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(srcCollections to collectionConfig)\n    )\n)\n\n// Start the replicator\n// (be sure to hold a reference somewhere that will prevent it from being GCed)\nrepl.start()\nthis.replicator = repl\n
        "},{"location":"handling-data-conflicts/#conflicts-when-updating","title":"Conflicts when Updating","text":"

        When updating a document, you need to consider the possibility of update conflicts. Update conflicts can occur when you try to update a document that\u2019s been updated since you read it.

        Example 4. How Updating May Cause Conflicts

        Here\u2019s a typical sequence of events that would create an update conflict:

        1. Your code reads the document\u2019s current properties, and constructs a modified copy to save.
        2. Another thread (perhaps the replicator) updates the document, creating a new revision with different properties.
        3. Your code updates the document with its modified properties, for example using Collection.save(MutableDocument).
        "},{"location":"handling-data-conflicts/#automatic-conflict-resolution_1","title":"Automatic Conflict Resolution","text":"

        In Couchbase Lite, by default, the conflict is automatically resolved and only one document update is stored in the database. The Last Write Wins (LWW) algorithm is used to pick the winning update. So, in effect, the changes from step 2 would be overwritten and lost.

        If the probability of update conflicts is high in your app, and you wish to avoid the possibility of overwritten data, the save() and delete() APIs provide additional method signatures with concurrency control:

        Save operations

        Collection.save(MutableDocument, ConcurrencyControl) \u2014 attempts to save the document with a concurrency control.

        The ConcurrencyControl parameter has two possible values:

        • LAST_WRITE_WINS (default): The last operation wins if there is a conflict.
        • FAIL_ON_CONFLICT: The operation will fail if there is a conflict. In this case, the app can detect the error that is being thrown, and handle it by re-reading the document, making the necessary conflict resolution, then trying again.

        Delete operations

        As with save operations, delete operations also have two method signatures, which specify how to handle a possible conflict:

        • Collection.delete(Document): The last write will win if there is a conflict.
        • Collection.delete(Document, ConcurrencyControl): attempts to delete the document with a concurrency control, with the same options described above.
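        The FAIL_ON_CONFLICT workflow described above (detect the conflict error, re-read the document, re-apply the change, try again) follows the classic optimistic-concurrency retry pattern. The following plain-Kotlin sketch illustrates the pattern with an atomic reference standing in for the document store; it is an illustration of the control flow, not Kotbase API:

```kotlin
import java.util.concurrent.atomic.AtomicReference

// Illustration of the FAIL_ON_CONFLICT retry pattern: re-read the current
// value, construct the modified copy, and retry if the "save" detects that
// the value changed underneath us in the meantime.
fun <T> retryOnConflict(store: AtomicReference<T>, modify: (T) -> T): T {
    while (true) {
        val current = store.get()       // "read the document's current properties"
        val updated = modify(current)   // "construct a modified copy to save"
        if (store.compareAndSet(current, updated)) return updated  // "save"
        // compareAndSet failing plays the role of the FAIL_ON_CONFLICT
        // error: loop to re-read and try again.
    }
}
```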
        "},{"location":"handling-data-conflicts/#custom-conflict-handlers","title":"Custom Conflict Handlers","text":"

        Developers can hook a conflict handler when saving a document, so they can easily handle the conflict in a single save method call.

        To implement custom conflict resolution when saving a document, apps must call the save method with a conflict handler block (Collection.save(MutableDocument, ConflictHandler)).

        The following code snippet shows an example of merging properties from the existing document (curDoc) into the one being saved (newDoc). In the event of conflicting keys, it will pick the key value from newDoc.

        Example 5. Merging document properties

        val mutableDocument = collection.getDocument(\"xyz\")?.toMutable() ?: return\nmutableDocument.setString(\"name\", \"apples\")\ncollection.save(mutableDocument) { newDoc, curDoc ->\n    if (curDoc == null) {\n        return@save false\n    }\n    val dataMap: MutableMap<String, Any?> = curDoc.toMap().toMutableMap()\n    dataMap.putAll(newDoc.toMap())\n    newDoc.setData(dataMap)\n    true\n}\n
        "},{"location":"indexing/","title":"Indexing","text":"

        Couchbase Lite database data model concepts - indexes

        "},{"location":"indexing/#introduction","title":"Introduction","text":"

        Querying documents using a pre-existing database index is much faster because an index narrows down the set of documents to examine \u2014 see the Query Troubleshooting topic.

        When planning the indexes you need for your database, remember that while indexes make queries faster, they may also:

        • Make writes slightly slower, because each index must be updated whenever a document is updated
        • Make your Couchbase Lite database slightly larger

        Too many indexes may hurt performance. Optimal performance depends on designing and creating the right indexes to go along with your queries.

        Constraints

        Couchbase Lite does not currently support partial value indexes (indexes with non-property expressions). You should only index properties that you plan to use in your queries.

        "},{"location":"indexing/#creating-a-new-index","title":"Creating a new index","text":"

        You can use SQL++ or QueryBuilder syntaxes to create an index.

        Example 2 creates a new index for the type and name properties, shown in this data model:

        Example 1. Data Model

        {\n    \"_id\": \"hotel123\",\n    \"type\": \"hotel\",\n    \"name\": \"The Michigander\",\n    \"overview\": \"Ideally situated for exploration of the Motor City and the wider state of Michigan. Tripadvisor rated the hotel ...\",\n    \"state\": \"Michigan\"\n}\n
        "},{"location":"indexing/#sql","title":"SQL++","text":"

        The code to create the index will look something like this:

        Example 2. Create index

        collection.createIndex(\n    \"TypeNameIndex\",\n    ValueIndexConfiguration(\"type\", \"name\")\n)\n
        "},{"location":"indexing/#querybuilder","title":"QueryBuilder","text":"

        Tip

        See the QueryBuilder topic to learn more about QueryBuilder.

        The code to create the index will look something like this:

        Example 3. Create index with QueryBuilder

        collection.createIndex(\n    \"TypeNameIndex\",\n    IndexBuilder.valueIndex(\n        ValueIndexItem.property(\"type\"),\n        ValueIndexItem.property(\"name\")\n    )\n)\n
        "},{"location":"installation/","title":"Installation","text":"

        Add the Kotbase dependency to your Kotlin Multiplatform project in the commonMain source set dependencies of your shared module's build.gradle.kts:

        Enterprise EditionCommunity Edition build.gradle.kts
        kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite-ee:3.1.3-1.1.0\")\n        }\n    }\n}\n
        build.gradle.kts
        kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite:3.1.3-1.1.0\")\n        }\n    }\n}\n

        Note

        The Couchbase Lite Community Edition is free and open source. The Enterprise Edition is free for development and testing, but requires a license from Couchbase for production use. See Community vs Enterprise Edition.

        Kotbase is published to Maven Central. The Couchbase Lite Enterprise Edition dependency additionally requires the Couchbase Maven repository.

        Enterprise EditionCommunity Edition build.gradle.kts
        repositories {\n    mavenCentral()\n    maven(\"https://mobile.maven.couchbase.com/maven2/dev/\")\n}\n
        build.gradle.kts
        repositories {\n    mavenCentral()\n}\n
        "},{"location":"installation/#native-platforms","title":"Native Platforms","text":"

        Native platform targets should additionally link to the Couchbase Lite dependency native binary. See Supported Platforms for more details.

        "},{"location":"installation/#linux","title":"Linux","text":"

        Both the JVM running on Linux and native Linux targets require a specific version of the libicu dependency. (You will see an error such as libLiteCore.so: libicuuc.so.71: cannot open shared object file: No such file or directory, indicating the expected version.) If the required version isn't available from your distribution's package manager, you can download it from GitHub.

        "},{"location":"integrate-custom-listener/","title":"Integrate Custom Listener","text":"

        Couchbase Lite database peer-to-peer sync \u2014 integrate a custom-built listener

        "},{"location":"integrate-custom-listener/#overview","title":"Overview","text":"

        This is an Enterprise Edition feature.

        This content covers how to integrate a custom MessageEndpointListener solution with Couchbase Lite to handle the data transfer, which is the sending and receiving of data. Where applicable, we discuss how to integrate Couchbase Lite into the workflow.

        The following sections describe a typical Peer-to-Peer workflow.

        "},{"location":"integrate-custom-listener/#peer-discovery","title":"Peer Discovery","text":"

        Peer discovery is the first step. The communication framework will generally include a peer discovery API for devices to advertise themselves on the network and to browse for other peers.

        "},{"location":"integrate-custom-listener/#active-peer","title":"Active Peer","text":"

        The first step is to initialize the Couchbase Lite database.

        "},{"location":"integrate-custom-listener/#passive-peer","title":"Passive Peer","text":"

        In addition to initializing the database, the Passive Peer must initialize the MessageEndpointListener. The MessageEndpointListener acts as a listener for incoming connections.

        val listener = MessageEndpointListener(\n    MessageEndpointListenerConfigurationFactory.newConfig(collections, ProtocolType.MESSAGE_STREAM)\n)\n
        "},{"location":"integrate-custom-listener/#peer-selection-and-connection-setup","title":"Peer Selection and Connection Setup","text":"

        Once a peer device is found, the application code must decide whether it should establish a connection with that peer. This step includes inviting a peer to a session and peer authentication.

        This is handled by the Communication Framework.

        Once the remote peer has been authenticated, the next step is to connect with that peer and initialize the MessageEndpoint API.

        "},{"location":"integrate-custom-listener/#replication-setup","title":"Replication Setup","text":""},{"location":"integrate-custom-listener/#active-peer_1","title":"Active Peer","text":"

        When the connection is established, the Active Peer must instantiate a MessageEndpoint object corresponding to the remote peer.

        // The delegate must implement the `MessageEndpointDelegate` protocol.\nval messageEndpoint = MessageEndpoint(\"UID:123\", \"active\", ProtocolType.MESSAGE_STREAM, delegate)\n

        The MessageEndpoint constructor takes the following arguments:

        1. uid: A unique ID that represents the remote Active Peer.
        2. target: This represents the remote Passive Peer and could be any suitable representation of the remote peer. It could be an ID, URL, etc. If using the Multipeer Connectivity Framework, this could be the MCPeerID.
        3. protocolType: Specifies the kind of transport you intend to implement. There are two options:
          • The default (MESSAGE_STREAM) means that you want to \"send a series of messages\", or in other words the Communication Framework will control the formatting of messages so that there are clear boundaries between messages.
          • The alternative (BYTE_STREAM) means that you just want to send raw bytes over the stream, and Couchbase Lite should format them for you to ensure that messages get delivered in full. Typically, the Communication Framework will handle message assembly and disassembly, so you would use the MESSAGE_STREAM option in most cases.
        4. delegate: The delegate that will implement the MessageEndpointDelegate protocol, which is a factory for MessageEndpointConnection.

        Then, a Replicator is instantiated with the initialized MessageEndpoint as the target.

        // Create the replicator object.\nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        collections = mapOf(collections to null),\n        target = messageEndpoint\n    )\n)\n\n// Start the replication.\nrepl.start()\nthis.replicator = repl\n

        Next, Couchbase Lite will call back the application code through the MessageEndpointDelegate lambda. When the application receives the callback, it must create an instance of MessageEndpointConnection and return it.

        /* implementation of MessageEndpointDelegate */\nval delegate: MessageEndpointDelegate = { endpoint ->\n    ActivePeerConnection()\n}\n

        Next, Couchbase Lite will call back the application code through the MessageEndpointConnection.open() method.

        /* implementation of MessageEndpointConnection */\noverride fun open(connection: ReplicatorConnection, completion: MessagingCompletion) {\n    replicatorConnection = connection\n    completion(true, null)\n}\n

        The connection argument is then set on an instance variable. The application code must keep track of every ReplicatorConnection associated with every MessageEndpointConnection.

        The MessageError argument in the completion block specifies whether the error is recoverable or not. If it is a recoverable error, the replicator will begin a retry process, creating a new MessageEndpointConnection instance.

        "},{"location":"integrate-custom-listener/#passive-peer_1","title":"Passive Peer","text":"

        After connection establishment on the Passive Peer, the first step is to initialize a new MessageEndpointConnection and pass it to the listener. This message tells the listener to accept incoming data from that peer.

        /* implements MessageEndpointConnection */\nval connection = PassivePeerConnection()\nlistener?.accept(connection)\n

        listener is the instance of the MessageEndpointListener that was created in the first step (Peer Discovery).

        Couchbase Lite will call the application code back through the MessageEndpointConnection.open() method.

        /* implementation of MessageEndpointConnection */\noverride fun open(connection: ReplicatorConnection, completion: MessagingCompletion) {\n    replicatorConnection = connection\n    completion(true, null)\n}\n

        The connection argument is then set on an instance variable. The application code must keep track of every ReplicatorConnection associated with every MessageEndpointConnection.

        At this point, the connection is established, and both peers are ready to exchange data.

        "},{"location":"integrate-custom-listener/#pushpull-replication","title":"Push/Pull Replication","text":"

        Typically, an application needs to send data and receive data. The directionality of the replication could be any of the following:

        • Push only: The data is pushed from the local database to the remote database.
        • Pull only: The data is pulled from the remote database to the local database.
        • Push and Pull: The data is exchanged both ways.

        Usually, the remote is a Sync Gateway database identified through a URL. In Peer-to-Peer syncing, the remote is another Couchbase Lite database.

        The replication lifecycle is handled through the MessageEndpointConnection.
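The direction is chosen when the replicator is configured. A sketch using the factory API shown elsewhere in this documentation (the endpoint uid, srcCollections, and delegate values are placeholders assumed to exist in your integration):

```kotlin
// Sketch: a push-only replicator over a custom message endpoint.
// "endpoint-1", srcCollections, and delegate are placeholders.
val target = MessageEndpoint("endpoint-1", null, ProtocolType.MESSAGE_STREAM, delegate)

val repl = Replicator(
    ReplicatorConfigurationFactory.newConfig(
        target = target,
        collections = mapOf(srcCollections to null),
        type = ReplicatorType.PUSH  // or ReplicatorType.PULL / PUSH_AND_PULL
    )
)
repl.start()
```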

        "},{"location":"integrate-custom-listener/#active-peer_2","title":"Active Peer","text":"

        When Couchbase Lite calls back the application code through the MessageEndpointConnection.send() method, you should send that data to the other peer using the Communication Framework.

        /* implementation of MessageEndpointConnection */\noverride fun send(message: Message, completion: MessagingCompletion) {\n    /* send the data to the other peer */\n    /* ... */\n    /* call the completion handler once the message is sent */\n    completion(true, null)\n}\n

        Once the data is sent, call the completion block to acknowledge the completion. You can use the MessageError in the completion block to specify whether the error is recoverable. If it is a recoverable error, the replicator will begin a retry process, creating a new MessageEndpointConnection.
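For example, a failure in the Communication Framework might be reported as recoverable so the replicator retries. This sketch assumes a hypothetical sendToPeer() function; MessagingError wraps the underlying exception together with a recoverable flag:

```kotlin
/* sketch of send() with error reporting; sendToPeer() is hypothetical */
override fun send(message: Message, completion: MessagingCompletion) {
    try {
        sendToPeer(message.toData())
        completion(true, null)
    } catch (e: Exception) {
        // Recoverable: the replicator will retry, creating a new MessageEndpointConnection
        completion(false, MessagingError(e, true))
    }
}
```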

        When data is received from the Passive Peer via the Communication Framework, you call the ReplicatorConnection.receive() method.

        replicatorConnection?.receive(message)\n

        The ReplicatorConnection\u2019s receive() method then processes the data and persists it to the local database.

        "},{"location":"integrate-custom-listener/#passive-peer_2","title":"Passive Peer","text":"

        As in the case of the Active Peer, the Passive Peer must implement the MessageEndpointConnection.send() method to send data to the other peer.

        /* implementation of MessageEndpointConnection */\noverride fun send(message: Message, completion: MessagingCompletion) {\n    /* send the data to the other peer */\n    /* ... */\n    /* call the completion handler once the message is sent */\n    completion(true, null)\n}\n

        Once the data is sent, call the completion block to acknowledge the completion. You can use the MessageError in the completion block to specify whether the error is recoverable. If it is a recoverable error, the replicator will begin a retry process, creating a new MessageEndpointConnection.

        When data is received from the Active Peer via the Communication Framework, you call the ReplicatorConnection.receive() method.

        replicatorConnection?.receive(message)\n
        "},{"location":"integrate-custom-listener/#connection-teardown","title":"Connection Teardown","text":"

        When a peer disconnects from a peer-to-peer network, all connected peers are notified. The disconnect notification is a good opportunity to close and remove a replication connection. The steps to tear down the connection are slightly different depending on whether the active or passive peer disconnects first. We will cover each case below.

        "},{"location":"integrate-custom-listener/#initiated-by-active-peer","title":"Initiated by Active Peer","text":""},{"location":"integrate-custom-listener/#active-peer_3","title":"Active Peer","text":"

        When an Active Peer disconnects, it must call the ReplicatorConnection.close() method.

        fun disconnect() {\n    replicatorConnection?.close(null)\n    replicatorConnection = null\n}\n

        Then, Couchbase Lite will call back your code through the MessageEndpointConnection.close() to allow the application to disconnect with the Communication Framework.

        override fun close(error: Exception?, completion: MessagingCloseCompletion) {\n    /* disconnect with communications framework */\n    /* ... */\n    /* call completion handler */\n    completion()\n}\n
        "},{"location":"integrate-custom-listener/#passive-peer_3","title":"Passive Peer","text":"

        When the Passive Peer receives the corresponding disconnect notification from the Communication Framework, it must call the ReplicatorConnection.close() method.

        replicatorConnection?.close(null)\n

        Then, Couchbase Lite will call back your code through the MessageEndpointConnection.close() to allow the application to disconnect with the Communication Framework.

        /* implementation of MessageEndpointConnection */\noverride fun close(error: Exception?, completion: MessagingCloseCompletion) {\n    /* disconnect with communications framework */\n    /* ... */\n    /* call completion handler */\n    completion()\n}\n
        "},{"location":"integrate-custom-listener/#initiated-by-passive-peer","title":"Initiated by Passive Peer","text":""},{"location":"integrate-custom-listener/#passive-peer_4","title":"Passive Peer","text":"

        When the Passive Peer disconnects, it must call the MessageEndpointListener.closeAll() method.

        listener?.closeAll()\n

        Then, Couchbase Lite will call back your code through the MessageEndpointConnection.close() to allow the application to disconnect with the Communication Framework.

        /* implementation of MessageEndpointConnection */\noverride fun close(error: Exception?, completion: MessagingCloseCompletion) {\n    /* disconnect with communications framework */\n    /* ... */\n    /* call completion handler */\n    completion()\n}\n
        "},{"location":"integrate-custom-listener/#active-peer_4","title":"Active Peer","text":"

        When the Active Peer receives the corresponding disconnect notification from the Communication Framework, it must call the ReplicatorConnection.close() method.

        fun disconnect() {\n    replicatorConnection?.close(null)\n    replicatorConnection = null\n}\n

        Then, Couchbase Lite will call back your code through the MessageEndpointConnection.close() to allow the application to disconnect with the Communication Framework.

        override fun close(error: Exception?, completion: MessagingCloseCompletion) {\n    /* disconnect with communications framework */\n    /* ... */\n    /* call completion handler */\n    completion()\n}\n
        "},{"location":"intra-device-sync/","title":"Intra-device Sync","text":"

        Couchbase Lite Database Sync - Synchronize changes between databases on the same device

        "},{"location":"intra-device-sync/#overview","title":"Overview","text":"

        This is an Enterprise Edition feature.

        Couchbase Lite supports replication between two local databases at the database, scope, or collection level. This allows a Couchbase Lite replicator to store data on secondary storage. It is useful in scenarios when a user\u2019s device is damaged and its data is moved to a different device.

        Example 1. Replication between Local Databases

        val repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = DatabaseEndpoint(targetDb),\n        collections = mapOf(srcCollections to null),\n        type = ReplicatorType.PUSH\n    )\n)\n\n// Start the replicator\nrepl.start()\n// (be sure to hold a reference somewhere that will prevent it from being GCed)\nthis.replicator = repl\n
        "},{"location":"kermit/","title":"Kermit","text":"

        Kotbase Kermit is a Couchbase Lite custom logger which logs to Kermit. Kermit can direct its logs to any number of log outputs, including the console.

        "},{"location":"kermit/#installation","title":"Installation","text":"Enterprise EditionCommunity Edition build.gradle.kts
        kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite-ee-kermit:3.1.3-1.1.0\")\n        }\n    }\n}\n
        build.gradle.kts
        kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite-kermit:3.1.3-1.1.0\")\n        }\n    }\n}\n
        "},{"location":"kermit/#usage","title":"Usage","text":"
        // Disable default console logs and log to Kermit\nDatabase.log.console.level = LogLevel.NONE\nDatabase.log.custom = KermitCouchbaseLiteLogger(kermit)\n
        "},{"location":"kotlin-extensions/","title":"Kotlin Extensions","text":"

        Couchbase Lite \u2014 Kotlin support

        "},{"location":"kotlin-extensions/#introduction","title":"Introduction","text":"

        In addition to implementing the full Couchbase Lite Java SDK API, Kotbase also provides the additional APIs available in the Couchbase Lite Android KTX SDK, which includes a number of Kotlin-specific extensions.

        This includes:

        • Configuration factories for the configuration of important Couchbase Lite objects such as Databases, Replicators, and Listeners.
        • Change Flows that monitor key Couchbase Lite objects for changes using Kotlin features such as coroutines and Flows.

        Additionally, while not available in the Java SDK, as Java doesn't support operator overloading, Kotbase adds support for Fragment subscript APIs, similar to Couchbase Lite Swift, Objective-C, and .NET.

        "},{"location":"kotlin-extensions/#configuration-factories","title":"Configuration Factories","text":"

        Couchbase Lite provides a set of configuration factories. These allow use of named parameters to specify property settings.

        This makes it simple to create variant configurations, by simply overriding named parameters:

        Example of overriding configuration

        val listener8080 = URLEndpointListenerConfigurationFactory.newConfig(\n    networkInterface = \"en0\",\n    port = 8080\n)\nval listener8081 = listener8080.newConfig(port = 8081)\n
        "},{"location":"kotlin-extensions/#database","title":"Database","text":"

        Use DatabaseConfigurationFactory to create a DatabaseConfiguration object, overriding the receiver\u2019s values with the passed parameters.

        In UseDefinition
        val database = Database(\n    \"getting-started\",\n    DatabaseConfigurationFactory.newConfig()\n)\n
        val DatabaseConfigurationFactory: DatabaseConfiguration? = null\n\nfun DatabaseConfiguration?.newConfig(\n    databasePath: String? = null, \n    encryptionKey: EncryptionKey? = null\n): DatabaseConfiguration\n
        "},{"location":"kotlin-extensions/#replication","title":"Replication","text":"

        Use ReplicatorConfigurationFactory to create a ReplicatorConfiguration object, overriding the receiver\u2019s values with the passed parameters.

        In UseDefinition
        val replicator = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        collections = mapOf(db.collections to null),\n        target = URLEndpoint(\"ws://localhost:4984/getting-started-db\"),\n        type = ReplicatorType.PUSH_AND_PULL,\n        authenticator = BasicAuthenticator(\"sync-gateway\", \"password\".toCharArray())\n    )\n)\n
        val ReplicatorConfigurationFactory: ReplicatorConfiguration? = null\n\npublic fun ReplicatorConfiguration?.newConfig(\n    target: Endpoint? = null,\n    collections: Map<out kotlin.collections.Collection<Collection>, CollectionConfiguration?>? = null,\n    type: ReplicatorType? = null,\n    continuous: Boolean? = null,\n    authenticator: Authenticator? = null,\n    headers: Map<String, String>? = null,\n    pinnedServerCertificate: ByteArray? = null,\n    maxAttempts: Int? = null,\n    maxAttemptWaitTime: Int? = null,\n    heartbeat: Int? = null,\n    enableAutoPurge: Boolean? = null,\n    acceptOnlySelfSignedServerCertificate: Boolean? = null,\n    acceptParentDomainCookies: Boolean? = null\n): ReplicatorConfiguration\n
        "},{"location":"kotlin-extensions/#full-text-search","title":"Full Text Search","text":"

        Use FullTextIndexConfigurationFactory to create a FullTextIndexConfiguration object, overriding the receiver\u2019s values with the passed parameters.

        In UseDefinition
        collection.createIndex(\n    \"overviewFTSIndex\",\n    FullTextIndexConfigurationFactory.newConfig(\"overview\")\n)\n
        val FullTextIndexConfigurationFactory: FullTextIndexConfiguration? = null\n\nfun FullTextIndexConfiguration?.newConfig(\n    vararg expressions: String = emptyArray(), \n    language: String? = null, \n    ignoreAccents: Boolean? = null\n): FullTextIndexConfiguration\n
        "},{"location":"kotlin-extensions/#indexing","title":"Indexing","text":"

        Use ValueIndexConfigurationFactory to create a ValueIndexConfiguration object, overriding the receiver\u2019s values with the passed parameters.

        In UseDefinition
        collection.createIndex(\n    \"TypeNameIndex\",\n    ValueIndexConfigurationFactory.newConfig(\"type\", \"name\")\n)\n
        val ValueIndexConfigurationFactory: ValueIndexConfiguration? = null\n\nfun ValueIndexConfiguration?.newConfig(vararg expressions: String = emptyArray()): ValueIndexConfiguration\n
        "},{"location":"kotlin-extensions/#logs","title":"Logs","text":"

        Use LogFileConfigurationFactory to create a LogFileConfiguration object, overriding the receiver\u2019s values with the passed parameters.

        In UseDefinition
        Database.log.file.apply {\n    config = LogFileConfigurationFactory.newConfig(\n        directory = \"path/to/temp/logs\",\n        maxSize = 10240,\n        maxRotateCount = 5,\n        usePlainText = false\n    )\n    level = LogLevel.INFO\n}\n
        val LogFileConfigurationFactory: LogFileConfiguration? = null\n\nfun LogFileConfiguration?.newConfig(\n    directory: String? = null,\n    maxSize: Long? = null,\n    maxRotateCount: Int? = null,\n    usePlainText: Boolean? = null\n): LogFileConfiguration\n
        "},{"location":"kotlin-extensions/#change-flows","title":"Change Flows","text":"

        These wrappers use Flows to monitor for changes.

        "},{"location":"kotlin-extensions/#collection-change-flow","title":"Collection Change Flow","text":"

        Use the Collection.collectionChangeFlow() to monitor collection change events.

        In UseDefinition
        scope.launch {\n    collection.collectionChangeFlow()\n        .map { it.documentIDs }\n        .collect { docIds: List<String> ->\n            // handle changes\n        }\n}\n
        fun Collection.collectionChangeFlow(\n    coroutineContext: CoroutineContext? = null\n): Flow<CollectionChange>\n
        "},{"location":"kotlin-extensions/#document-change-flow","title":"Document Change Flow","text":"

        Use Collection.documentChangeFlow() to monitor changes to a document.

        In UseDefinition
        scope.launch {\n    collection.documentChangeFlow(\"1001\")\n        .map { it.collection.getDocument(it.documentID)?.getString(\"lastModified\") }\n        .collect { lastModified: String? ->\n            // handle document changes\n        }\n}\n
        fun Collection.documentChangeFlow(\n    documentId: String, \n    coroutineContext: CoroutineContext? = null\n): Flow<DocumentChange>\n
        "},{"location":"kotlin-extensions/#replicator-change-flow","title":"Replicator Change Flow","text":"

        Use Replicator.replicatorChangeFlow() to monitor replicator changes.

        In UseDefinition
        scope.launch {\n    repl.replicatorChangesFlow()\n        .map { it.status.activityLevel }\n        .collect { activityLevel: ReplicatorActivityLevel ->\n            // handle replicator changes\n        }\n}\n
        fun Replicator.replicatorChangesFlow(\n    coroutineContext: CoroutineContext? = null\n): Flow<ReplicatorChange>\n
        "},{"location":"kotlin-extensions/#document-replicator-change-flow","title":"Document Replicator Change Flow","text":"

        Use Replicator.documentReplicationFlow() to monitor document changes during replication.

        In UseDefinition
        scope.launch {\n    repl.documentReplicationFlow()\n        .map { it.documents }\n        .collect { docs: List<ReplicatedDocument> ->\n            // handle replicated documents\n        }\n}\n
        fun Replicator.documentReplicationFlow(\n    coroutineContext: CoroutineContext? = null\n): Flow<DocumentReplication>\n
        "},{"location":"kotlin-extensions/#query-change-flow","title":"Query Change Flow","text":"

        Use Query.queryChangeFlow() to monitor changes to a query.

        In UseDefinition
        scope.launch {\n    query.queryChangeFlow()\n        .mapNotNull { change ->\n            val err = change.error\n            if (err != null) {\n                throw err\n            }\n            change.results?.allResults()\n        }\n        .collect { results: List<Result> ->\n            // handle query results\n        }\n}\n
        fun Query.queryChangeFlow(\n    coroutineContext: CoroutineContext? = null\n): Flow<QueryChange>\n
        "},{"location":"kotlin-extensions/#fragment-subscripts","title":"Fragment Subscripts","text":"

        Kotbase uses Kotlin's indexed access operator to implement Couchbase Lite's Fragment subscript APIs for Database, Collection, Document, Array, Dictionary, and Result, for concise, type-safe, and null-safe access to arbitrary values in a nested JSON object. MutableDocument, MutableArray, and MutableDictionary also support the MutableFragment APIs for mutating values.

        Supported types can get Fragment or MutableFragment objects by either index or key. Fragment objects represent an arbitrary entry in a key path, themselves supporting subscript access to nested values.

        Finally, the typed optional value at the end of a key path can be accessed or set with the Fragment properties, e.g. array, dictionary, string, int, date, etc.

        Subscript API examples

        val db = Database(\"db\")\nval coll = db.defaultCollection\nval doc = coll[\"doc-id\"]       // DocumentFragment\ndoc.exists                     // true or false\ndoc.document                   // \"doc-id\" Document from Database\ndoc[\"array\"].array             // Array value from \"array\" key\ndoc[\"array\"][0].string         // String value from first Array item\ndoc[\"dict\"].dictionary         // Dictionary value from \"dict\" key\ndoc[\"dict\"][\"num\"].int         // Int value from Dictionary \"num\" key\ncoll[\"milk\"][\"exp\"].date       // Instant value from \"exp\" key from \"milk\" Document\nval newDoc = MutableDocument(\"new-id\")\nnewDoc[\"name\"].value = \"Sally\" // set \"name\" value\n
        "},{"location":"ktx/","title":"KTX","text":"

        The KTX extensions include the excellent Kotlin extensions by MOLO17, as well as other convenience functions for composing queries, observing change Flows, and creating indexes.

        "},{"location":"ktx/#installation","title":"Installation","text":"Enterprise EditionCommunity Edition build.gradle.kts
        kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite-ee-ktx:3.1.3-1.1.0\")\n        }\n    }\n}\n
        build.gradle.kts
        kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite-ktx:3.1.3-1.1.0\")\n        }\n    }\n}\n
        "},{"location":"ktx/#usage","title":"Usage","text":""},{"location":"ktx/#querybuilder-extensions","title":"QueryBuilder extensions","text":"

        The syntax for building a query is more straightforward thanks to Kotlin's infix function support.

        select(all()) from collection where { \"type\" equalTo \"user\" }\n

        Or just a bunch of fields:

        select(\"name\", \"surname\") from collection where { \"type\" equalTo \"user\" }\n

        Or if you also want the document ID:

        select(Meta.id, all()) from collection where { \"type\" equalTo \"user\" }\nselect(Meta.id, \"name\", \"surname\") from collection where { \"type\" equalTo \"user\" }\n

        You can even do more powerful querying:

        select(\"name\", \"type\")\n    .from(collection)\n    .where {\n        ((\"type\" equalTo \"user\") and (\"name\" equalTo \"Damian\")) or\n        ((\"type\" equalTo \"pet\") and (\"name\" like \"Kitt\"))\n    }\n    .orderBy { \"name\".ascending() }\n    .limit(10)\n

        There are also convenience extensions for performing SELECT COUNT(*) queries:

        val query = selectCount() from collection where { \"type\" equalTo \"user\" }\nval count = query.execute().countResult()\n
        "},{"location":"ktx/#document-builder-dsl","title":"Document builder DSL","text":"

        For creating a MutableDocument ready to be saved, you can use a Kotlin builder DSL:

        val document = MutableDocument {\n    \"name\" to \"Damian\"\n    \"surname\" to \"Giusti\"\n    \"age\" to 24\n    \"pets\" to listOf(\"Kitty\", \"Kitten\", \"Kitto\")\n    \"type\" to \"user\"\n}\n\ncollection.save(document)\n
        "},{"location":"ktx/#collection-creation-functions","title":"Collection creation functions","text":"

        You can create a MutableArray or MutableDictionary using idiomatic vararg functions:

        mutableArrayOf(\"hello\", 42, true)\nmutableDictOf(\"key1\" to \"value1\", \"key2\" to 2, \"key3\" to null)\n

        The similar mutableDocOf function allows nesting dictionary types, unlike the MutableDocument DSL:

        mutableDocOf(\n    \"string\" to \"hello\",\n    \"number\" to 42,\n    \"array\" to mutableArrayOf(1, 2, 3),\n    \"dict\" to mutableDictOf(\"key\" to \"value\")\n)\n
        "},{"location":"ktx/#flow-support","title":"Flow support","text":"

        Supplementing the Flow APIs from Couchbase Lite Android KTX present in the base couchbase-lite modules, Kotbase KTX adds some additional useful Flow APIs.

        "},{"location":"ktx/#query-flow","title":"Query Flow","text":"

        Query.asFlow() builds on top of Query.queryChangeFlow() to emit non-null ResultSets and throw any QueryChange errors.

        select(all())\n    .from(collection)\n    .where { \"type\" equalTo \"user\" }\n    .asFlow()\n    .collect { value: ResultSet -> \n        // consume ResultSet\n    }\n
        "},{"location":"ktx/#document-flow","title":"Document Flow","text":"

        Unlike Collection.documentChangeFlow(), which only emits DocumentChanges, Collection.documentFlow() handles the common use case of getting the initial document state and observing changes from the collection, enabling reactive UI patterns.

        collection.documentFlow(\"userProfile\")\n    .collect { doc: Document? ->\n        // consume Document\n    }\n
        "},{"location":"ktx/#resultset-model-mapping","title":"ResultSet model mapping","text":""},{"location":"ktx/#map-delegation","title":"Map delegation","text":"

        Thanks to Map delegation, mapping a ResultSet to a Kotlin class has never been so easy.

        The library provides the ResultSet.toObjects() and Query.asObjectsFlow() extensions for helping to map results given a factory lambda.

        Such factory lambdas accept a Map<String, Any?> and return an instance of a certain type. Those requirements fit perfectly with a Map-delegated class.

        class User(map: Map<String, Any?>) {\n    val name: String by map\n    val surname: String by map\n    val age: Int by map\n}\n\nval users: List<User> = query.execute().toObjects(::User)\n\nval usersFlow: Flow<List<User>> = query.asObjectsFlow(::User)\n
        "},{"location":"ktx/#json-deserialization","title":"JSON deserialization","text":"

        Kotbase KTX also provides extensions for mapping documents from a JSON string to a Kotlin class. This works well with a serialization library, such as kotlinx-serialization, to decode the JSON string into a Kotlin object.

        @Serializable\nclass User(\n    val name: String,\n    val surname: String,\n    val age: Int\n)\n\nval users: List<User> = query.execute().toObjects { json: String ->\n    Json.decodeFromString<User>(json)\n}\n\nval usersFlow: Flow<List<User>> = query.asObjectsFlow { json: String ->\n    Json.decodeFromString<User>(json)\n}\n
        "},{"location":"ktx/#index-creation","title":"Index creation","text":"

        Kotbase KTX provides concise top-level functions for index creation:

        collection.createIndex(\"typeNameIndex\", valueIndex(\"type\", \"name\"))\ncollection.createIndex(\"overviewFTSIndex\", fullTextIndex(\"overview\"))\n
        "},{"location":"ktx/#replicator-extensions","title":"Replicator extensions","text":"

        For the Android platform, you can bind the Replicator's start() and stop() methods so they are invoked automatically when your Lifecycle-enabled component is resumed or paused.

        // Binds the Replicator to the Application lifecycle.\nreplicator.bindToLifecycle(ProcessLifecycleOwner.get().lifecycle)\n
        // Binds the Replicator to the Activity/Fragment lifecycle.\n// inside an Activity or Fragment...\noverride fun onCreate(savedInstanceState: Bundle?) {\n    replicator.bindToLifecycle(lifecycle)\n}\n

        That's it! The Replicator will be automatically started when your component reaches the ON_RESUME event, and stopped when it reaches the ON_PAUSE event. As you might expect, no further action is taken after ON_DESTROY.

        "},{"location":"license/","title":"License","text":"

        Copyright 2023 Jeff Lockhart

        Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

        Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

        "},{"location":"license/#third-party-licenses","title":"Third Party Licenses","text":"
        • AndroidX
        • Couchbase Lite
        • Dokka
        • Kermit
        • KorIO
        • Kotlin
        • kotlinx-atomicfu
        • kotlinx-binary-compatibility-validator
        • kotlinx-coroutines
        • kotlinx-datetime
        • kotlinx-io
        • kotlinx-kover
        • kotlinx-serialization
        • Material for MkDocs
        • Mike
        • MkDocs
        • MkDocs Macros Plugin
        • MockK
        • MOLO17 Couchbase Lite Kotlin
        • Multiplatform Paging
        • Stately
        • vanniktech gradle-maven-publish-plugin
        "},{"location":"live-queries/","title":"Live Queries","text":"

        Couchbase Lite database data querying concepts \u2014 live queries

        "},{"location":"live-queries/#activating-a-live-query","title":"Activating a Live Query","text":"

        A live query is a query that, once activated, remains active and monitors the database for changes, refreshing the result set whenever a change occurs. As such, it is a great way to build reactive user interfaces \u2014 especially table/list views \u2014 that keep themselves up to date.

        A simple use case: a replicator runs, pulling new data from a server, while a live-query-driven UI automatically updates to show the data without the user having to refresh manually. This helps your app feel quick and responsive.

        To activate a live query, just add a change listener to the query statement. It will be immediately active. When a change is detected the query automatically runs, and posts the new query result to any observers (change listeners).

        Example 1. Starting a Live Query

        val query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection)) \n\n// Adds a query change listener.\n// Changes will be posted on the main queue.\nval token = query.addChangeListener { change ->\n    change.results?.let { rs ->\n        rs.forEach {\n            println(\"results: ${it.keys}\")\n            /* Update UI */\n        }\n    } \n}\n
        1. Build the query statements.
        2. Activate the live query by attaching a listener. Save the token in order to detach the listener and stop the query later \u2014 see Example 2.

        Example 2. Stop a Live Query

        token.remove()\n

        Here we use the change listener token from Example 1 to remove the listener. Doing so stops the live query.

        "},{"location":"live-queries/#using-kotlin-flows","title":"Using Kotlin Flows","text":"

        Kotlin developers also have the option of using Flows to feed query changes to the UI.

        Define a live query as a Flow and activate a collector in the view creation function.

        fun watchQuery(query: Query): Flow<List<Result>> {\n    return query.queryChangeFlow()\n        .mapNotNull { change ->\n            val err = change.error\n            if (err != null) {\n               throw err\n            }\n            change.results?.allResults()\n        }\n}\n
        "},{"location":"n1ql-query-builder-differences/","title":"SQL++ QueryBuilder Differences","text":"

        Differences between Couchbase Lite\u2019s QueryBuilder and SQL++ for Mobile

        Couchbase Lite\u2019s SQL++ for Mobile supports all QueryBuilder features, except Predictive Query and Index. See Table 1 for the features supported by SQL++ but not by QueryBuilder.

        Table 1. QueryBuilder Differences

        Category Components Conditional Operator CASE(WHEN \u2026 THEN \u2026 ELSE \u2026) Array Functions ARRAY_AGG ARRAY_AVG ARRAY_COUNT ARRAY_IFNULL ARRAY_MAX ARRAY_MIN ARRAY_SUM Conditional Functions IFMISSING IFMISSINGORNULL IFNULL MISSINGIF NULLIF Math Functions DIV IDIV ROUND_EVEN Pattern Matching Functions REGEXP_CONTAINS REGEXP_LIKE REGEXP_POSITION REGEXP_REPLACE Type Checking Functions ISARRAY ISATOM ISBOOLEAN ISNUMBER ISOBJECT ISSTRING TYPE Type Conversion Functions TOARRAY TOATOM TOBOOLEAN TONUMBER TOOBJECT TOSTRING"},{"location":"n1ql-query-strings/","title":"SQL++ Query Strings","text":"

        How to use SQL++ query strings to build effective queries with Kotbase

        Note

        The examples used in this topic are based on the Travel Sample app and data introduced in the Couchbase Mobile Workshop tutorial.

        "},{"location":"n1ql-query-strings/#introduction","title":"Introduction","text":"

        Developers using Kotbase can provide SQL++ query strings using the SQL++ Query API. This API uses query statements of the form shown in Example 2.

        The structure and semantics of the query format are based on that of Couchbase Server\u2019s SQL++ query language \u2014 see SQL++ Reference Guide and SQL++ Data Model.

        "},{"location":"n1ql-query-strings/#running","title":"Running","text":"

        The database can create a query object with the SQL++ string. See Query Result Sets for how to work with result sets.

        Example 1. Running a SQL++ Query

        val query = database.createQuery(\n    \"SELECT META().id AS id FROM _ WHERE type = \\\"hotel\\\"\"\n)\nreturn query.execute().use { rs -> rs.allResults() }\n

        We are accessing the current database using the shorthand notation _ \u2014 see the FROM clause for more on data source selection and Query Parameters for more on parameterized queries.

        "},{"location":"n1ql-query-strings/#query-format","title":"Query Format","text":"

        The API uses query statements of the form shown in Example 2.

        Example 2. Query Format

        SELECT ____\nFROM 'data-source'\nWHERE ____,\nJOIN ____\nGROUP BY ____\nORDER BY ____\nLIMIT ____\nOFFSET ____\n

        Query Components

        Component Description SELECT statement The document properties that will be returned in the result set FROM The data source to be queried WHERE statement The query criteria. The SELECTed properties of documents matching these criteria will be returned in the result set JOIN statement The criteria for joining multiple documents GROUP BY statement The criteria used to group returned items in the result set ORDER BY statement The criteria used to order the items in the result set LIMIT statement The maximum number of results to be returned OFFSET statement The number of results to be skipped before starting to return results

        Tip

        We recommend working through the SQL++ Tutorials to build your SQL++ skills.

        "},{"location":"n1ql-query-strings/#select-statement","title":"SELECT statement","text":""},{"location":"n1ql-query-strings/#purpose","title":"Purpose","text":"

        Projects the result returned by the query, identifying the columns it will contain.

        "},{"location":"n1ql-query-strings/#syntax","title":"Syntax","text":"

        Example 3. SQL++ Select Syntax

        select = SELECT _ ( DISTINCT | ALL )? selectResult\n\nselectResults = selectResult ( _ ',' _ selectResult )*\n\nselectResult = expression ( _ (AS)? columnAlias )?\n\ncolumnAlias = IDENTIFIER\n
        "},{"location":"n1ql-query-strings/#arguments","title":"Arguments","text":"
        1. The select clause begins with the SELECT keyword.
          • The optional ALL argument is used to specify that the query should return ALL results (the default).
          • The optional DISTINCT argument specifies that the query should remove duplicated results.
2. selectResults is a list of columns projected in the query result. Each column is an expression, which could be a property expression or any other expression or function. You can use the wildcard * to select all columns \u2014 see Select Wildcard.
        3. Use the optional AS argument to provide an alias name for a property. Each property can be aliased by putting the AS <alias name> after the column name.
        "},{"location":"n1ql-query-strings/#select-wildcard","title":"Select Wildcard","text":"

        When using the SELECT * option the column name (key) of the SQL++ string is one of:

        • The alias name if one was specified
        • The data source name (or its alias if provided) as specified in the FROM clause.

This behavior is in line with that of Couchbase Server SQL++ \u2014 see the example in Table 1.

        Table 1. Example Column Names for SELECT *

        Query Column Name SELECT * AS data FROM _ data SELECT * FROM _ _ SELECT * FROM _default _default SELECT * FROM db db SELECT * FROM db AS store store"},{"location":"n1ql-query-strings/#example","title":"Example","text":"

        Example 4. SELECT properties

        SELECT *\n\nSELECT db.* AS data\n\nSELECT name fullName\n\nSELECT db.name fullName\n\nSELECT DISTINCT address.city\n
        1. Use the * wildcard to select all properties.
        2. Select all properties from the db data source. Give the object an alias name of data.
3. Select the name property, giving it the alias fullName (the AS keyword is optional).
4. Select the name property from the db data source, giving it the alias fullName.
        5. Select the property item city from its parent property address.

        See Query Result Sets for more on processing query results.

        "},{"location":"n1ql-query-strings/#from","title":"FROM","text":""},{"location":"n1ql-query-strings/#purpose_1","title":"Purpose","text":"

        Specifies the data source, or sources, and optionally applies an alias (AS). It is mandatory.

        "},{"location":"n1ql-query-strings/#syntax_1","title":"Syntax","text":"
        FROM dataSource\n      (optional JOIN joinClause )\n
        "},{"location":"n1ql-query-strings/#datasource","title":"Datasource","text":"

        A datasource can be:

        • < database-name > : default collection
        • _ (underscore) : default collection
        • < scope-name >.< collection-name > : a collection in a scope
        • < collection-name > : a collection in the default scope
        "},{"location":"n1ql-query-strings/#arguments_1","title":"Arguments","text":"
1. Here dataSource is the database or collection name against which the query is to run (see Datasource, above). Use AS to give the data source an alias you can use within the query. To use the current database, without specifying a name, use _ as the datasource.
        2. JOIN joinclause \u2014 use this optional argument to link data sources \u2014 see JOIN statement.
"},{"location":"n1ql-query-strings/#example_1","title":"Example","text":"

          Example 5. FROM clause

          SELECT name FROM db\nSELECT name FROM scope.collection\nSELECT store.name FROM db AS store\nSELECT store.name FROM db store\nSELECT name FROM _\nSELECT store.name FROM _ AS store\nSELECT store.name FROM _ store\n
          "},{"location":"n1ql-query-strings/#join-statement","title":"JOIN statement","text":""},{"location":"n1ql-query-strings/#purpose_2","title":"Purpose","text":"

          The JOIN clause enables you to select data from multiple data sources linked by criteria specified in the JOIN statement.

Currently only self-joins are supported. For example, to combine airline details with route details, linked by the airline id \u2014 see Example 6.

          "},{"location":"n1ql-query-strings/#syntax_2","title":"Syntax","text":"
joinClause = ( join )*\n\njoin = joinOperator _ dataSource _  (constraint)?\n\njoinOperator = ( LEFT (OUTER)? | INNER | CROSS )? JOIN\n\ndataSource = databaseName ( ( AS | _ )? databaseAlias )?\n\nconstraint = ( ON expression )?\n
          "},{"location":"n1ql-query-strings/#arguments_2","title":"Arguments","text":"
          1. The join clause starts with a JOIN operator followed by the data source.
2. Five JOIN operators are supported: JOIN, LEFT JOIN, LEFT OUTER JOIN, INNER JOIN, and CROSS JOIN. Note that JOIN and INNER JOIN are equivalent, as are LEFT JOIN and LEFT OUTER JOIN.
          3. The join constraint starts with the ON keyword followed by the expression that defines the joining constraints.
          "},{"location":"n1ql-query-strings/#example_2","title":"Example","text":"
          SELECT db.prop1, other.prop2 FROM db JOIN db AS other ON db.key = other.key\n\nSELECT db.prop1, other.prop2 FROM db LEFT JOIN db other ON db.key = other.key\n\nSELECT * FROM route r JOIN airline a ON r.airlineid = meta(a).id WHERE a.country = \"France\"\n

          Example 6. Using JOIN to Combine Document Details

This example joins documents of type route with documents of type airline, using the document ID (_id) of the airline document and the airlineid property of the route document.

SELECT * FROM `travel-sample` r JOIN `travel-sample` a ON r.airlineid = meta(a).id WHERE a.country = \"France\"\n
          "},{"location":"n1ql-query-strings/#where-statement","title":"WHERE statement","text":""},{"location":"n1ql-query-strings/#purpose_3","title":"Purpose","text":"

          Specifies the selection criteria used to filter results.

          As with SQL, use the WHERE statement to choose which documents are returned by your query.

          "},{"location":"n1ql-query-strings/#syntax_3","title":"Syntax","text":"
          where = WHERE expression\n
          "},{"location":"n1ql-query-strings/#arguments_3","title":"Arguments","text":"

WHERE evaluates the expression to a BOOLEAN value. You can chain any number of expressions to implement sophisticated filtering.

          See also \u2014 Operators for more on building expressions and Query Parameters for more on parameterized queries.

          "},{"location":"n1ql-query-strings/#examples","title":"Examples","text":"
          SELECT name FROM db WHERE department = 'engineer' AND group = 'mobile'\n
          "},{"location":"n1ql-query-strings/#group-by-statement","title":"GROUP BY statement","text":""},{"location":"n1ql-query-strings/#purpose_4","title":"Purpose","text":"

          Use GROUP BY to arrange values in groups of one or more properties.

          "},{"location":"n1ql-query-strings/#syntax_4","title":"Syntax","text":"
          groupBy = grouping _( having )?\n\ngrouping = GROUP BY expression( _ ',' _ expression )*\n\nhaving = HAVING expression\n
          "},{"location":"n1ql-query-strings/#arguments_4","title":"Arguments","text":"
          1. The group by clause starts with the GROUP BY keyword followed by one or more expressions.
          2. grouping \u2014 the group by clause is normally used together with the aggregate functions (e.g. COUNT, MAX, MIN, SUM, AVG).
          3. having \u2014 allows you to filter the result based on aggregate functions \u2014 for example, HAVING count(empnum)>100.
          "},{"location":"n1ql-query-strings/#examples_1","title":"Examples","text":"
SELECT COUNT(empno), city FROM db GROUP BY city\n\nSELECT COUNT(empno), city FROM db GROUP BY city HAVING COUNT(empno) > 100\n\nSELECT COUNT(empno), city FROM db WHERE state = 'CA' GROUP BY city HAVING COUNT(empno) > 100\n
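The grouping and HAVING filter can be modeled over a plain Kotlin collection. The sketch below is illustrative only (the Employee type is hypothetical, and this is not the Kotbase query API):

```kotlin
// Hypothetical row type; mirrors the empno/city properties used above.
data class Employee(val empno: Int, val city: String)

// GROUP BY city, COUNT(empno), HAVING COUNT(empno) > minCount.
fun countByCityHaving(rows: List<Employee>, minCount: Int): Map<String, Int> =
    rows.groupingBy { it.city }   // GROUP BY city
        .eachCount()              // COUNT per group
        .filterValues { it > minCount }  // HAVING COUNT(...) > minCount
```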
          "},{"location":"n1ql-query-strings/#order-by-statement","title":"ORDER BY statement","text":""},{"location":"n1ql-query-strings/#purpose_5","title":"Purpose","text":"

          Sort query results based on a given expression result.

          "},{"location":"n1ql-query-strings/#syntax_5","title":"Syntax","text":"
          orderBy = ORDER BY ordering ( _ ',' _ ordering )*\n\nordering = expression ( _ order )?\n\norder = ( ASC / DESC )\n
          "},{"location":"n1ql-query-strings/#arguments_5","title":"Arguments","text":"
          1. orderBy \u2014 The order by clause starts with the ORDER BY keyword followed by the ordering clause.
          2. ordering \u2014 The ordering clause specifies the properties or expressions to use for ordering the results.
          3. order \u2014 In each ordering clause, the sorting direction is specified using the optional ASC (ascending) or DESC (descending) directives. Default is ASC.
          "},{"location":"n1ql-query-strings/#examples_2","title":"Examples","text":"

          Example 7. Simple usage

          SELECT name FROM db  ORDER BY name\n\nSELECT name FROM db  ORDER BY name DESC\n\nSELECT name, score FROM db  ORDER BY name ASC, score DESC\n
          "},{"location":"n1ql-query-strings/#limit-statement","title":"LIMIT statement","text":""},{"location":"n1ql-query-strings/#purpose_6","title":"Purpose","text":"

          Specifies the maximum number of results to be returned by the query.

          "},{"location":"n1ql-query-strings/#syntax_6","title":"Syntax","text":"
          limit = LIMIT expression\n
          "},{"location":"n1ql-query-strings/#arguments_6","title":"Arguments","text":"

          The limit clause starts with the LIMIT keyword followed by an expression that will be evaluated as a number.

          "},{"location":"n1ql-query-strings/#examples_3","title":"Examples","text":"

          Example 8. Simple usage

          SELECT name FROM db LIMIT 10\n

          Return only 10 results

          "},{"location":"n1ql-query-strings/#offset-statement","title":"OFFSET statement","text":""},{"location":"n1ql-query-strings/#purpose_7","title":"Purpose","text":"

          Specifies the number of results to be skipped by the query.

          "},{"location":"n1ql-query-strings/#syntax_7","title":"Syntax","text":"
          offset = OFFSET expression\n
          "},{"location":"n1ql-query-strings/#arguments_7","title":"Arguments","text":"

          The offset clause starts with the OFFSET keyword followed by an expression that will be evaluated as a number that represents the number of results ignored before the query begins returning results.

          "},{"location":"n1ql-query-strings/#examples_4","title":"Examples","text":"

          Example 9. Simple usage

          SELECT name FROM db OFFSET 10\n\nSELECT name FROM db  LIMIT 10 OFFSET 10\n
1. Ignore the first 10 results
2. Ignore the first 10 results, then return the next 10 results
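OFFSET and LIMIT together behave like skipping and then taking from the ordered result list. A minimal plain-Kotlin sketch of the same semantics (illustrative, not the Kotbase API):

```kotlin
// LIMIT `limit` OFFSET `offset` over an already-ordered result list:
// skip the first `offset` rows, then return at most `limit` rows.
fun <T> page(rows: List<T>, limit: Int, offset: Int): List<T> =
    rows.drop(offset).take(limit)
```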
          "},{"location":"n1ql-query-strings/#expressions","title":"Expressions","text":"

          In this section Literals | Identifiers | Property Expressions | Any and Every Expressions | Parameter Expressions | Parenthesis Expressions

          Expressions are references to identifiers that resolve to values. Categories of expression comprise the elements covered in this section (see above), together with Operators and Functions, which are covered in their own sections.

          "},{"location":"n1ql-query-strings/#literals","title":"Literals","text":"

          Boolean | Numeric | String | NULL | MISSING | Array | Dictionary

          "},{"location":"n1ql-query-strings/#boolean","title":"Boolean","text":""},{"location":"n1ql-query-strings/#purpose_8","title":"Purpose","text":"

          Represents a true or false value.

          "},{"location":"n1ql-query-strings/#syntax_8","title":"Syntax","text":"

          TRUE | FALSE

          "},{"location":"n1ql-query-strings/#example_3","title":"Example","text":"
          SELECT value FROM db  WHERE value = true\nSELECT value FROM db  WHERE value = false\n
          "},{"location":"n1ql-query-strings/#numeric","title":"Numeric","text":""},{"location":"n1ql-query-strings/#purpose_9","title":"Purpose","text":"

          Represents a numeric value. Numbers may be signed or unsigned digits. They have optional fractional and exponent components.

          "},{"location":"n1ql-query-strings/#syntax_9","title":"Syntax","text":"
          '-'? (('.' DIGIT+) | (DIGIT+ ('.' DIGIT*)?)) ( [Ee] [-+]? DIGIT+ )? WB\n\nDIGIT = [0-9]\n
          "},{"location":"n1ql-query-strings/#example_4","title":"Example","text":"
          SELECT value FROM db  WHERE value = 10\nSELECT value FROM db  WHERE value = 0\nSELECT value FROM db WHERE value = -10\nSELECT value FROM db WHERE value = 10.25\nSELECT value FROM db WHERE value = 10.25e2\nSELECT value FROM db WHERE value = 10.25E2\nSELECT value FROM db WHERE value = 10.25E+2\nSELECT value FROM db WHERE value = 10.25E-2\n
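As an illustration, the numeric-literal grammar above can be rendered as a single regular expression (a sketch of the rule, not code taken from the parser):

```kotlin
// '-'? (('.' DIGIT+) | (DIGIT+ ('.' DIGIT*)?)) ( [Ee] [-+]? DIGIT+ )?
val numericLiteral = Regex("""-?(\.\d+|\d+(\.\d*)?)([Ee][-+]?\d+)?""")

// True when the whole string is a valid SQL++ numeric literal.
fun isNumericLiteral(s: String): Boolean = numericLiteral.matches(s)
```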
          "},{"location":"n1ql-query-strings/#string","title":"String","text":""},{"location":"n1ql-query-strings/#purpose_10","title":"Purpose","text":"

          The string literal represents a string or sequence of characters.

          "},{"location":"n1ql-query-strings/#syntax_10","title":"Syntax","text":"
          \"characters\" | 'characters'\n

          The string literal can be double-quoted as well as single-quoted.

          "},{"location":"n1ql-query-strings/#example_5","title":"Example","text":"
          SELECT firstName, lastName FROM db WHERE middleName = \"middle\"\nSELECT firstName, lastName FROM db WHERE middleName = 'middle'\n
          "},{"location":"n1ql-query-strings/#null","title":"NULL","text":""},{"location":"n1ql-query-strings/#purpose_11","title":"Purpose","text":"

          The literal NULL represents an empty value.

          "},{"location":"n1ql-query-strings/#syntax_11","title":"Syntax","text":"
          NULL\n
          "},{"location":"n1ql-query-strings/#example_6","title":"Example","text":"
          SELECT firstName, lastName FROM db WHERE middleName IS NULL\n
          "},{"location":"n1ql-query-strings/#missing","title":"MISSING","text":""},{"location":"n1ql-query-strings/#purpose_12","title":"Purpose","text":"

          The MISSING literal represents a missing name-value pair in a document.

          "},{"location":"n1ql-query-strings/#syntax_12","title":"Syntax","text":"
          MISSING\n
          "},{"location":"n1ql-query-strings/#example_7","title":"Example","text":"
          SELECT firstName, lastName FROM db WHERE middleName IS MISSING\n
          "},{"location":"n1ql-query-strings/#array","title":"Array","text":""},{"location":"n1ql-query-strings/#purpose_13","title":"Purpose","text":"

          Represents an Array.

          "},{"location":"n1ql-query-strings/#syntax_13","title":"Syntax","text":"
          arrayLiteral = '[' _ (expression ( _ ',' _ e2:expression )* )? ']'\n
          "},{"location":"n1ql-query-strings/#example_8","title":"Example","text":"
          SELECT [\"a\", \"b\", \"c\"] FROM _\nSELECT [ property1, property2, property3] FROM _\n
          "},{"location":"n1ql-query-strings/#dictionary","title":"Dictionary","text":""},{"location":"n1ql-query-strings/#purpose_14","title":"Purpose","text":"

          Represents a dictionary literal.

          "},{"location":"n1ql-query-strings/#syntax_14","title":"Syntax","text":"
          dictionaryLiteral = '{' _ ( STRING_LITERAL ':' e:expression\n  ( _ ',' _ STRING_LITERAL ':' _ expression )* )?\n   '}'\n
          "},{"location":"n1ql-query-strings/#example_9","title":"Example","text":"
          SELECT { 'name': 'James', 'department': 10 } FROM db\nSELECT { 'name': 'James', 'department': dept } FROM db\nSELECT { 'name': 'James', 'phones': ['650-100-1000', '650-100-2000'] } FROM db\n
          "},{"location":"n1ql-query-strings/#identifiers","title":"Identifiers","text":""},{"location":"n1ql-query-strings/#purpose_15","title":"Purpose","text":"

          Identifiers provide symbolic references. Use them for example to identify: column alias names, database names, database alias names, property names, parameter names, function names, and FTS index names.

          "},{"location":"n1ql-query-strings/#syntax_15","title":"Syntax","text":"
          <[a-zA-Z_] [a-zA-Z0-9_$]*> _ | \"`\" ( [^`] | \"``\"   )* \"`\"  _\n

The identifier allows the characters a-z, A-Z, 0-9, _ (underscore), and $. Identifiers are case-sensitive.

          Tip

          To use other characters in the identifier, surround the identifier with the backtick ` character.

          "},{"location":"n1ql-query-strings/#example_10","title":"Example","text":"

          Example 10. Identifiers

          SELECT * FROM _\n\nSELECT * FROM `db-1`\n\nSELECT key FROM db\n\nSELECT key$1 FROM db_1\n\nSELECT `key-1` FROM db\n

          Use of backticks allows a hyphen as part of the identifier name.
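A small helper can apply these rules when building query strings: plain identifiers pass through, anything else is backtick-quoted with embedded backticks doubled, as the grammar's escape rule shows. The helper name is hypothetical and not part of the Kotbase API:

```kotlin
// <[a-zA-Z_] [a-zA-Z0-9_$]*> is the plain-identifier form.
val plainIdentifier = Regex("""[a-zA-Z_][a-zA-Z0-9_$]*""")

// Quote an identifier for use in a SQL++ string; embedded backticks
// are doubled, per the grammar's "``" escape.
fun quoteIdentifier(name: String): String =
    if (plainIdentifier.matches(name)) name
    else "`" + name.replace("`", "``") + "`"
```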

          "},{"location":"n1ql-query-strings/#property-expressions","title":"Property Expressions","text":""},{"location":"n1ql-query-strings/#purpose_16","title":"Purpose","text":"

          The property expression is used to reference a property in a document.

          "},{"location":"n1ql-query-strings/#syntax_16","title":"Syntax","text":"
          property = '*'| dataSourceName '.' _ '*'  | propertyPath\n\npropertyPath = propertyName (\n    ('.' _ propertyName ) |\n    ('[' _ INT_LITERAL _ ']' _  )\n    )*\n\npropertyName = IDENTIFIER\n
          1. Prefix the property expression with the data source name or alias to indicate its origin.
          2. Use dot syntax to refer to nested properties in the propertyPath.
          3. Use bracket ([index]) syntax to refer to an item in an array.
4. Use the asterisk (*) character to represent all properties. This can only be used in the result list of the SELECT clause.
          "},{"location":"n1ql-query-strings/#example_11","title":"Example","text":"

          Example 11. Property Expressions

          SELECT *\n  FROM db\n  WHERE contact.name = \"daniel\"\n\nSELECT db.*\n  FROM db\n  WHERE collection.contact.name = \"daniel\"\n\nSELECT collection.contact.address.city\n  FROM scope.collection\n  WHERE collection.contact.name = \"daniel\"\n\nSELECT contact.address.city\n  FROM scope.collection\n  WHERE contact.name = \"daniel\"\n\nSELECT contact.address.city, contact.phones[0]\n  FROM db\n  WHERE contact.name = \"daniel\"\n
          "},{"location":"n1ql-query-strings/#any-and-every-expressions","title":"Any and Every Expressions","text":""},{"location":"n1ql-query-strings/#purpose_17","title":"Purpose","text":"

          Evaluates expressions over items in an array object.

          "},{"location":"n1ql-query-strings/#syntax_17","title":"Syntax","text":"
          arrayExpression = \n  anyEvery _ variableName \n     _ IN  _ expression \n       _ SATISFIES _ expression \n    END \n\nanyEvery = anyOrSome AND EVERY | anyOrSome | EVERY\n\nanyOrSome = ANY | SOME\n
1. The array expression starts with ANY/SOME, EVERY, or ANY/SOME AND EVERY, each of which has a different function as described below, and is terminated by END.
  • ANY/SOME: Returns TRUE if at least one item in the array satisfies the expression, otherwise returns FALSE. NOTE: ANY and SOME are interchangeable.
  • EVERY: Returns TRUE if all items in the array satisfy the expression, otherwise returns FALSE. If the array is empty, returns TRUE.
  • ANY/SOME AND EVERY: Same as EVERY but returns FALSE if the array is empty.
          2. The variable name represents each item in the array.
          3. The IN keyword is used for specifying the array to be evaluated.
          4. The SATISFIES keyword is used for evaluating each item in the array.
          5. END terminates the array expression.
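The three forms can be modeled over a Kotlin list to make the empty-array behavior concrete (illustrative functions, not the Kotbase API):

```kotlin
// ANY/SOME: true if at least one item satisfies the predicate.
fun <T> anyOf(items: List<T>, satisfies: (T) -> Boolean) = items.any(satisfies)

// EVERY: true if all items satisfy it; true for an empty array.
fun <T> everyOf(items: List<T>, satisfies: (T) -> Boolean) = items.all(satisfies)

// ANY AND EVERY: like EVERY, but false for an empty array.
fun <T> anyAndEveryOf(items: List<T>, satisfies: (T) -> Boolean) =
    items.isNotEmpty() && items.all(satisfies)
```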
          "},{"location":"n1ql-query-strings/#example_12","title":"Example","text":"

Example 12. ANY and EVERY Expressions

          SELECT name\n  FROM db\n  WHERE ANY v\n          IN contacts\n          SATISFIES v.city = 'San Mateo'\n        END\n
          "},{"location":"n1ql-query-strings/#parameter-expressions","title":"Parameter Expressions","text":""},{"location":"n1ql-query-strings/#purpose_18","title":"Purpose","text":"

          Parameter expressions specify a value to be assigned from the parameter map presented when executing the query.

          Note

          If parameters are specified in the query string, but the parameter and value mapping is not specified in the query object, an error will be thrown when executing the query.

          "},{"location":"n1ql-query-strings/#syntax_18","title":"Syntax","text":"
          $IDENTIFIER\n
          "},{"location":"n1ql-query-strings/#examples_5","title":"Examples","text":"

          Example 13. Parameter Expression

          SELECT name\n  FROM db\n  WHERE department = $department\n

          Example 14. Using a Parameter

val query = database.createQuery(\"SELECT name FROM db WHERE department = \\$department\")\nquery.parameters = Parameters().setValue(\"department\", \"E001\")\nval result = query.execute()\n

The query resolves to SELECT name FROM db WHERE department = \"E001\"

          "},{"location":"n1ql-query-strings/#parenthesis-expressions","title":"Parenthesis Expressions","text":""},{"location":"n1ql-query-strings/#purpose_19","title":"Purpose","text":"

          Use parentheses to group expressions together to make them more readable or to establish operator precedences.

          "},{"location":"n1ql-query-strings/#example_13","title":"Example","text":"

          Example 15. Parenthesis Expression

-- Establish the desired operator precedence; do the addition before the multiplication\nSELECT (value1 + value2) * value3\n  FROM db\n\nSELECT *\n  FROM db\n  WHERE ((value1 + value2) * value3) + value4 = 10\n\nSELECT *\n  FROM db\n  -- Clarify the conditional grouping\n  WHERE (value1 = value2)\n     OR (value3 = value4)\n
          "},{"location":"n1ql-query-strings/#operators","title":"Operators","text":"

          In this section Binary Operators | Unary Operators | COLLATE Operators | CONDITIONAL Operator

          "},{"location":"n1ql-query-strings/#binary-operators","title":"Binary Operators","text":"

          Maths | Comparison Operators | Logical Operators | String Operator

          "},{"location":"n1ql-query-strings/#maths","title":"Maths","text":"

          Table 2. Maths Operators

          Op Desc Example + Add WHERE v1 + v2 = 10 - Subtract WHERE v1 - v2 = 10 * Multiply WHERE v1 * v2 = 10 / Divide \u2014 see note \u00b9 WHERE v1 / v2 = 10 % Modulo WHERE v1 % v2 = 0

\u00b9 If both operands are integers, integer division is used; if either is a floating-point number, float division is used. This differs from Server SQL++, which performs float division regardless. Use DIV(x, y) to force float division in CBL SQL++.
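Kotlin's own division operator happens to follow the same rule, so the CBL behavior can be illustrated directly:

```kotlin
// Two integer operands: integer division, as in CBL SQL++.
fun intDiv(a: Int, b: Int): Int = a / b

// A floating-point operand forces float division.
// (Server SQL++ always uses float division; CBL's DIV(x, y) forces it.)
fun floatDiv(a: Double, b: Double): Double = a / b
```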

          "},{"location":"n1ql-query-strings/#comparison-operators","title":"Comparison Operators","text":""},{"location":"n1ql-query-strings/#purpose_20","title":"Purpose","text":"

          The comparison operators are used in the WHERE statement to specify the condition on which to match documents.

          Table 3. Comparison Operators

Op Desc Example = or == Equals WHERE v1 = v2WHERE v1 == v2 != or <> Not Equal to WHERE v1 != v2WHERE v1 <> v2 > Greater than WHERE v1 > v2 >= Greater than or equal to WHERE v1 >= v2 < Less than WHERE v1 < v2 <= Less than or equal to WHERE v1 <= v2 IN Returns TRUE if the value is in the list or array of values specified by the right hand side expression; Otherwise returns FALSE. WHERE \"James\" IN contactsList LIKE String wildcard pattern matching \u00b2 comparison. Two wildcards are supported:
          • % Matches zero or more characters.
          • _ Matches a single character.
          WHERE name LIKE 'a%'WHERE name LIKE '%a'WHERE name LIKE '%or%'WHERE name LIKE 'a%o%'WHERE name LIKE '%_r%'WHERE name LIKE '%a_%'WHERE name LIKE '%a__%'WHERE name LIKE 'aldo' MATCH String matching using FTS see Full Text Search Functions WHERE v1-index MATCH \"value\" BETWEEN Logically equivalent to v1>=X and v1<=Y WHERE v1 BETWEEN 10 and 100 IS NULL \u00b3 Equal to NULL WHERE v1 IS NULL IS NOT NULL Not equal to NULL WHERE v1 IS NOT NULL IS MISSING Equal to MISSING WHERE v1 IS MISSING IS NOT MISSING Not equal to MISSING WHERE v1 IS NOT MISSING IS VALUED IS NOT NULL AND MISSING WHERE v1 IS VALUED IS NOT VALUED IS NULL OR MISSING WHERE v1 IS NOT VALUED

          \u00b2 Matching is case-insensitive for ASCII characters, case-sensitive for non-ASCII.
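A sketch of the LIKE wildcard semantics, translating the pattern to a Regex (% becomes .*, _ becomes .). Note that Kotlin's IGNORE_CASE is broader than the ASCII-only rule in footnote 2, so this is approximate and not the engine's implementation:

```kotlin
// Translate a SQL++ LIKE pattern into a Regex: % matches zero or
// more characters, _ matches a single character, everything else
// is treated literally.
fun likeToRegex(pattern: String): Regex {
    val sb = StringBuilder()
    for (ch in pattern) {
        when (ch) {
            '%' -> sb.append(".*")
            '_' -> sb.append('.')
            else -> sb.append(Regex.escape(ch.toString()))
        }
    }
    return Regex(sb.toString(), RegexOption.IGNORE_CASE)
}

fun like(value: String, pattern: String) = likeToRegex(pattern).matches(value)
```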

          \u00b3 Use of IS and IS NOT is limited to comparing NULL and MISSING values (this encompasses VALUED). This is different from QueryBuilder, in which they operate as equivalents of == and !=.

          Table 4. Comparing NULL and MISSING values using IS

          OP NON-NULL Value NULL MISSING IS NULL FALSE TRUE MISSING IS NOT NULL TRUE FALSE MISSING IS MISSING FALSE FALSE TRUE IS NOT MISSING TRUE TRUE FALSE IS VALUED TRUE FALSE FALSE IS NOT VALUED FALSE TRUE TRUE"},{"location":"n1ql-query-strings/#logical-operators","title":"Logical Operators","text":""},{"location":"n1ql-query-strings/#purpose_21","title":"Purpose","text":"

          Logical operators combine expressions using the following Boolean Logic Rules:

          • TRUE is TRUE, and FALSE is FALSE
          • Numbers 0 or 0.0 are FALSE
          • Arrays and dictionaries are FALSE
• String and Blob values are TRUE if they are cast as non-zero, FALSE if they are cast as 0 or 0.0
          • NULL is FALSE
          • MISSING is MISSING

          Note

          This is different from Server SQL++, where:

          • MISSING, NULL and FALSE are FALSE
• The number 0 is FALSE
          • Empty strings, arrays, and objects are FALSE
          • All other values are TRUE

          Tip

Use the TOBOOLEAN(expr) function to convert a value based on Server SQL++ boolean value rules.

          Table 5. Logical Operators

          Op Description Example AND Returns TRUE if the operand expressions evaluate to TRUE; otherwise FALSE.If an operand is MISSING and the other is TRUE returns MISSING, if the other operand is FALSE it returns FALSE.If an operand is NULL and the other is TRUE returns NULL, if the other operand is FALSE it returns FALSE. WHERE city = \"San Francisco\" AND status = true OR Returns TRUE if one of the operand expressions is evaluated to TRUE; otherwise returns FALSE.If an operand is MISSING, the operation will result in MISSING if the other operand is FALSE or TRUE if the other operand is TRUE.If an operand is NULL, the operation will result in NULL if the other operand is FALSE or TRUE if the other operand is TRUE. WHERE city = \u201cSan Francisco\u201d OR city = \"Santa Clara\"

          Table 6. Logical Operation Table

          a b a AND b a OR b TRUE TRUE TRUE TRUE FALSE FALSE TRUE NULL FALSE \u2075\u207b\u00b9 TRUE MISSING MISSING TRUE FALSE TRUE FALSE TRUE FALSE FALSE FALSE NULL FALSE FALSE \u2075\u207b\u00b9 MISSING FALSE MISSING NULL TRUE FALSE \u2075\u207b\u00b9 TRUE FALSE FALSE FALSE \u2075\u207b\u00b9 NULL FALSE \u2075\u207b\u00b9 FALSE \u2075\u207b\u00b9 MISSING FALSE \u2075\u207b\u00b2 MISSING \u2075\u207b\u00b3 MISSING TRUE MISSING TRUE FALSE FALSE MISSING NULL FALSE \u2075\u207b\u00b2 MISSING \u2075\u207b\u00b3 MISSING MISSING MISSING

          Note

          This differs from Server SQL++ in the following instances: \u2075\u207b\u00b9 Server will return: NULL instead of FALSE \u2075\u207b\u00b2 Server will return: MISSING instead of FALSE \u2075\u207b\u00b3 Server will return: NULL instead of MISSING
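Table 6, including the noted deviations from Server SQL++, can be modeled as a small three-valued (plus MISSING) logic. This is an illustrative model in Kotlin, not the query engine's code:

```kotlin
// Operand states for CBL SQL++ logical operators.
enum class Tri { TRUE, FALSE, NULL, MISSING }

// Per the boolean rules above, NULL evaluates as FALSE.
fun Tri.isFalseLike() = this == Tri.FALSE || this == Tri.NULL

// AND: FALSE wins, then MISSING propagates, else TRUE (Table 6).
fun cblAnd(a: Tri, b: Tri): Tri = when {
    a.isFalseLike() || b.isFalseLike() -> Tri.FALSE
    a == Tri.MISSING || b == Tri.MISSING -> Tri.MISSING
    else -> Tri.TRUE
}

// OR: TRUE wins, then MISSING propagates, else FALSE (Table 6).
fun cblOr(a: Tri, b: Tri): Tri = when {
    a == Tri.TRUE || b == Tri.TRUE -> Tri.TRUE
    a == Tri.MISSING || b == Tri.MISSING -> Tri.MISSING
    else -> Tri.FALSE
}
```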

          "},{"location":"n1ql-query-strings/#string-operator","title":"String Operator","text":""},{"location":"n1ql-query-strings/#purpose_22","title":"Purpose","text":"

          A single string operator is provided. It enables string concatenation.

          Table 7. String Operators

          Op Description Example || Concatenating SELECT firstnm || lastnm AS fullname FROM db"},{"location":"n1ql-query-strings/#unary-operators","title":"Unary Operators","text":""},{"location":"n1ql-query-strings/#purpose_23","title":"Purpose","text":"

          Three unary operators are provided. They operate by modifying an expression, making it numerically positive or negative, or by logically negating its value (TRUE becomes FALSE).

          "},{"location":"n1ql-query-strings/#syntax_19","title":"Syntax","text":"
          // UNARY_OP _ expr\n

          Table 8. Unary Operators

          Op Description Example + Positive value WHERE v1 = +10 - Negative value WHERE v1 = -10 NOT Logical Negate operator * WHERE \"James\" NOT IN contactsList

* The NOT operator is often used in conjunction with operators such as IN, LIKE, MATCH, and BETWEEN. A NOT operation on a NULL value returns NULL. A NOT operation on a MISSING value returns MISSING.

          Table 9. NOT Operation TABLE

a NOT a TRUE FALSE FALSE TRUE NULL NULL MISSING MISSING"},{"location":"n1ql-query-strings/#collate-operators","title":"COLLATE Operators","text":""},{"location":"n1ql-query-strings/#purpose_24","title":"Purpose","text":"

          Collate operators specify how the string comparison is conducted.

          "},{"location":"n1ql-query-strings/#usage","title":"Usage","text":"

          The collate operator is used in conjunction with string comparison expressions and ORDER BY clauses. It allows for one or more collations.

If multiple collations are used, the collations need to be specified in parentheses. When only one collation is used, the parentheses are optional.

          Note

          Collate is not supported by Server SQL++

          "},{"location":"n1ql-query-strings/#syntax_20","title":"Syntax","text":"
          collate = COLLATE collation | '(' collation (_ collation )* ')'\n\ncollation = NO? (UNICODE | CASE | DIACRITICS) WB\n
          "},{"location":"n1ql-query-strings/#arguments_8","title":"Arguments","text":"

          The available collation options are:

          • UNICODE: Conduct a Unicode comparison; the default is to do ASCII comparison.
          • CASE: Conduct case-sensitive comparison.
          • DIACRITIC: Take account of accents and diacritics in the comparison; on by default.
          • NO: This can be used as a prefix to the other collations, to disable them (for example: NOCASE to enable case-insensitive comparison)
          "},{"location":"n1ql-query-strings/#example_14","title":"Example","text":"
          SELECT department FROM db WHERE (name = \"fred\") COLLATE UNICODE\n
          SELECT department FROM db WHERE (name = \"fred\")\nCOLLATE (UNICODE)\n
          SELECT department FROM db WHERE (name = \"fred\") COLLATE (UNICODE CASE)\n
          SELECT name FROM db ORDER BY name COLLATE (UNICODE DIACRITIC)\n
          "},{"location":"n1ql-query-strings/#conditional-operator","title":"CONDITIONAL Operator","text":""},{"location":"n1ql-query-strings/#purpose_25","title":"Purpose","text":"

          The Conditional (or CASE) operator evaluates conditional logic in a similar way to the IF/ELSE operator.

          "},{"location":"n1ql-query-strings/#syntax_21","title":"Syntax","text":"
          CASE (expression) (WHEN expression THEN expression)+ (ELSE expression)? END\n\nCASE (expression)? (!WHEN expression)?\n  (WHEN expression THEN expression)+ (ELSE expression)? END\n

Both Simple Case and Searched Case expressions are supported. The syntactic difference is that the Simple Case expression has an expression after the CASE keyword.

          1. Simple Case Expression
            • If the CASE expression is equal to the first WHEN expression, the result is the THEN expression.
            • Otherwise, any subsequent WHEN clauses are evaluated in the same way.
  • If no match is found, the result of the CASE expression is the ELSE expression, or NULL if no ELSE expression was provided.
          2. Searched Case Expression
            • If the first WHEN expression is TRUE, the result of this expression is its THEN expression.
  • Otherwise, subsequent WHEN clauses are evaluated in the same way. If no WHEN clause evaluates to TRUE, the result of the expression is the ELSE expression, or NULL if no ELSE expression was provided.
          "},{"location":"n1ql-query-strings/#example_15","title":"Example","text":"

          Example 16. Simple Case

SELECT CASE state WHEN 'CA' THEN 'Local' ELSE 'Non-Local' END FROM db\n

          Example 17. Searched Case

SELECT CASE WHEN shippedOn IS NOT NULL THEN 'SHIPPED' ELSE 'NOT-SHIPPED' END FROM db\n
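Either form can be run through Kotbase as a query string. A hedged sketch (the `database` variable and the `shippedOn` property are illustrative):

```kotlin
// Illustrative sketch: searched CASE embedded in a Kotbase query string.
val query = database.createQuery(
    "SELECT CASE WHEN shippedOn IS NOT NULL THEN 'SHIPPED' ELSE 'NOT-SHIPPED' END AS status FROM _"
)
val statuses = query.execute().allResults()
```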
          "},{"location":"n1ql-query-strings/#functions","title":"Functions","text":"

          In this section Aggregation Functions | Array Functions | Conditional Functions | Date and Time Functions | Full Text Search Functions | Maths Functions | Metadata Functions | Pattern Searching Functions | String Functions | Type Checking Functions | Type Conversion Functions

          "},{"location":"n1ql-query-strings/#purpose_26","title":"Purpose","text":"

          Functions are also expressions.

          "},{"location":"n1ql-query-strings/#syntax_22","title":"Syntax","text":"

          The function syntax is the same as Java\u2019s method syntax. It starts with the function name, followed by optional arguments inside parentheses.

          function = functionName parenExprs\n\nfunctionName  = IDENTIFIER\n\nparenExprs = '(' ( expression (_ ',' _ expression )* )? ')'\n
          "},{"location":"n1ql-query-strings/#aggregation-functions","title":"Aggregation Functions","text":"

          Table 10. Aggregation Functions

• AVG(expr): Returns the average of the number values in the group
• COUNT(expr): Returns a count of all values in the group
• MIN(expr): Returns the minimum value in the group
• MAX(expr): Returns the maximum value in the group
• SUM(expr): Returns the sum of all number values in the group"},{"location":"n1ql-query-strings/#array-functions","title":"Array Functions","text":"

          Table 11. Array Functions

• ARRAY_AGG(expr): Returns an array of the non-MISSING group values in the input expression, including NULL values
• ARRAY_AVG(expr): Returns the average of all non-NULL number values in the array, or NULL if there are none
• ARRAY_CONTAINS(expr, value): Returns TRUE if the value exists in the array; otherwise FALSE
• ARRAY_COUNT(expr): Returns the number of non-NULL values in the array
• ARRAY_IFNULL(expr): Returns the first non-NULL value in the array
• ARRAY_MAX(expr): Returns the largest non-NULL, non-MISSING value in the array
• ARRAY_MIN(expr): Returns the smallest non-NULL, non-MISSING value in the array
• ARRAY_LENGTH(expr): Returns the length of the array
• ARRAY_SUM(expr): Returns the sum of all non-NULL number values in the array"},{"location":"n1ql-query-strings/#conditional-functions","title":"Conditional Functions","text":"

          Table 12. Conditional Functions

• IFMISSING(expr1, expr2, \u2026): Returns the first non-MISSING value, or NULL if all values are MISSING
• IFMISSINGORNULL(expr1, expr2, \u2026): Returns the first non-NULL and non-MISSING value, or NULL if all values are NULL or MISSING
• IFNULL(expr1, expr2, \u2026): Returns the first non-NULL value, or NULL if all values are NULL
• MISSINGIF(expr1, expr2): Returns MISSING when expr1 = expr2; otherwise returns expr1. Returns MISSING if either or both expressions are MISSING. Returns NULL if either or both expressions are NULL.
• NULLIF(expr1, expr2): Returns NULL when expr1 = expr2; otherwise returns expr1. Returns MISSING if either or both expressions are MISSING. Returns NULL if either or both expressions are NULL."},{"location":"n1ql-query-strings/#date-and-time-functions","title":"Date and Time Functions","text":"

          Table 13. Date and Time Functions

• STR_TO_MILLIS(expr): Returns the number of milliseconds since the Unix epoch for the given ISO 8601 date input string.
• STR_TO_UTC(expr): Returns the ISO 8601 UTC date time string for the given ISO 8601 date input string.
• MILLIS_TO_STR(expr): Returns an ISO 8601 date time string, in the device's local timezone, for the given number of milliseconds since the Unix epoch.
• MILLIS_TO_UTC(expr): Returns the UTC ISO 8601 date time string for the given number of milliseconds since the Unix epoch."},{"location":"n1ql-query-strings/#full-text-search-functions","title":"Full Text Search Functions","text":"

          Table 14. FTS Functions

• MATCH(indexName, term): Returns TRUE if the term expression matches the FTS-indexed term. indexName identifies the FTS index; term is the expression to match. Example: WHERE MATCH (description, \"couchbase\")
• RANK(indexName): Returns a numeric value indicating how well the current query result matches the full-text query when performing the MATCH. indexName is an IDENTIFIER for the FTS index. Example: WHERE MATCH (description, \"couchbase\") ORDER BY RANK(description)"},{"location":"n1ql-query-strings/#maths-functions","title":"Maths Functions","text":"

          Table 15. Maths Functions

• ABS(expr): Returns the absolute value of a number.
• ACOS(expr): Returns the arc cosine in radians.
• ASIN(expr): Returns the arcsine in radians.
• ATAN(expr): Returns the arctangent in radians.
• ATAN2(expr1, expr2): Returns the arctangent of expr1/expr2.
• CEIL(expr): Returns the smallest integer not less than the number.
• COS(expr): Returns the cosine value of the expression.
• DIV(expr1, expr2): Returns the float division of expr1 and expr2. Both expr1 and expr2 are cast to a double number before division. The returned result is always a double.
• DEGREES(expr): Converts radians to degrees.
• E(): Returns the base of natural logarithms.
• EXP(expr): Returns e raised to the power of expr.
• FLOOR(expr): Returns the largest integer not greater than the number.
• IDIV(expr1, expr2): Returns the integer division of expr1 and expr2.
• LN(expr): Returns the log base e value.
• LOG(expr): Returns the log base 10 value.
• PI(): Returns the value of PI.
• POWER(expr1, expr2): Returns expr1 raised to the power of expr2.
• RADIANS(expr): Converts degrees to radians.
• ROUND(expr (, digits_expr)?): Returns the value rounded to the given number of integer digits to the right of the decimal point (left if digits is negative). Digits are 0 if not given. The function uses the Rounding Away From Zero convention to round midpoint values to the next number away from zero (so, for example, ROUND(1.75) returns 1.8 but ROUND(1.85) returns 1.9). *
• ROUND_EVEN(expr (, digits_expr)?): Returns the value rounded to the given number of integer digits to the right of the decimal point (left if digits is negative). Digits are 0 if not given. The function uses the Rounding to Nearest Even (Banker's Rounding) convention, which rounds midpoint values to the nearest even number (for example, both ROUND_EVEN(1.75) and ROUND_EVEN(1.85) return 1.8).
• SIGN(expr): Returns -1 for negative, 0 for zero, and 1 for positive numbers.
• SIN(expr): Returns the sine value.
• SQRT(expr): Returns the square root value.
• TAN(expr): Returns the tangent value.
• TRUNC(expr (, digits_expr)?): Returns the number truncated to the given number of integer digits to the right of the decimal point (left if digits is negative). Digits are 0 if not given.

          * The behavior of the ROUND() function is different from Server SQL++ ROUND(), which rounds the midpoint values using Rounding to Nearest Even convention.
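The two rounding conventions can be compared side by side in a single query. An illustrative sketch (the `database` variable is hypothetical, and the query assumes at least one document exists in the default collection); the expected values follow from the conventions described above:

```kotlin
// Illustrative sketch: ROUND() rounds midpoints away from zero,
// while ROUND_EVEN() rounds them to the nearest even digit.
// Per the conventions above: ROUND(1.85, 1) = 1.9, ROUND_EVEN(1.85, 1) = 1.8.
val query = database.createQuery(
    "SELECT ROUND(1.85, 1) AS awayFromZero, ROUND_EVEN(1.85, 1) AS nearestEven FROM _ LIMIT 1"
)
```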

          "},{"location":"n1ql-query-strings/#metadata-functions","title":"Metadata Functions","text":"

          Table 16. Metadata Functions

• META(dataSourceName?): Returns a dictionary containing metadata properties, including:
  • id : the document identifier
  • sequence : the document's mutation sequence number
  • deleted : a flag indicating whether the document is deleted or not
  • expiration : the document expiration date, in timestamp format
  The optional dataSourceName identifies the database or the database alias name.
  To access a specific metadata property, use the dot expression. Examples:
  SELECT META() FROM db
  SELECT META().id, META().sequence, META().deleted, META().expiration FROM db
  SELECT p.name, r.rating FROM product AS p INNER JOIN reviews AS r ON META(r).id IN p.reviewList WHERE META(p).id = \"product320\""},{"location":"n1ql-query-strings/#pattern-searching-functions","title":"Pattern Searching Functions","text":"

          Table 17. Pattern Searching Functions

• REGEXP_CONTAINS(expr, pattern): Returns TRUE if the string value contains any sequence that matches the regular expression pattern.
• REGEXP_LIKE(expr, pattern): Returns TRUE if the string value exactly matches the regular expression pattern.
• REGEXP_POSITION(expr, pattern): Returns the first position of the occurrence of the regular expression pattern within the input string expression, or -1 if no match is found. Position counting starts from zero.
• REGEXP_REPLACE(expr, pattern, repl [, n]): Returns a new string with occurrences of pattern replaced with repl. If n is given, at most n replacements are performed; otherwise, all matching occurrences are replaced."},{"location":"n1ql-query-strings/#string-functions","title":"String Functions","text":"

          Table 18. String Functions

• CONTAINS(expr, substring_expr): Returns true if the substring exists within the input string; otherwise returns false.
• LENGTH(expr): Returns the length of a string, defined as the number of characters within the string.
• LOWER(expr): Returns the lowercase version of the input string.
• LTRIM(expr): Returns the string with all leading whitespace characters removed.
• RTRIM(expr): Returns the string with all trailing whitespace characters removed.
• TRIM(expr): Returns the string with all leading and trailing whitespace characters removed.
• UPPER(expr): Returns the uppercase version of the input string."},{"location":"n1ql-query-strings/#type-checking-functions","title":"Type Checking Functions","text":"

          Table 19. Type Checking Functions

• ISARRAY(expr): Returns TRUE if the expression is an array; otherwise returns MISSING, NULL, or FALSE.
• ISATOM(expr): Returns TRUE if the expression is a Boolean, number, or string; otherwise returns MISSING, NULL, or FALSE.
• ISBOOLEAN(expr): Returns TRUE if the expression is a Boolean; otherwise returns MISSING, NULL, or FALSE.
• ISNUMBER(expr): Returns TRUE if the expression is a number; otherwise returns MISSING, NULL, or FALSE.
• ISOBJECT(expr): Returns TRUE if the expression is an object (dictionary); otherwise returns MISSING, NULL, or FALSE.
• ISSTRING(expr): Returns TRUE if the expression is a string; otherwise returns MISSING, NULL, or FALSE.
• TYPE(expr): Returns one of the following strings, based on the value of the expression:
          • \u201cmissing\u201d
          • \u201cnull\u201d
          • \u201cboolean\u201d
          • \u201cnumber\u201d
          • \u201cstring\u201d
          • \u201carray\u201d
          • \u201cobject\u201d
          • \u201cbinary\u201d
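TYPE() can be combined with the type-checking predicates above in a query string. An illustrative sketch (the `database` variable and `contact` property are hypothetical):

```kotlin
// Illustrative sketch: report the runtime type of a property per document.
val query = database.createQuery(
    "SELECT META().id, TYPE(contact) AS contactType FROM _"
)
val results = query.execute().allResults()
```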
          "},{"location":"n1ql-query-strings/#type-conversion-functions","title":"Type Conversion Functions","text":"

          Table 20. Type Conversion Functions

• TOARRAY(expr): Returns MISSING if the value is MISSING; NULL if the value is NULL; the array itself if the value is an array; all other values wrapped in an array.
• TOATOM(expr): Returns MISSING if the value is MISSING; NULL if the value is NULL; the single item if the value is an array of a single item; the single value if the value is an object with a single key/value pair; booleans, numbers, and strings as-is; NULL for all other values.
• TOBOOLEAN(expr): Returns MISSING if the value is MISSING; NULL if the value is NULL; FALSE if the value is FALSE; FALSE if the value is 0 or NaN; FALSE if the value is an empty string, array, or object; TRUE for all other values.
• TONUMBER(expr): Returns MISSING if the value is MISSING; NULL if the value is NULL; 0 if the value is FALSE; 1 if the value is TRUE; the number itself if the value is a number; the number parsed from the string value; NULL for all other values.
• TOOBJECT(expr): Returns MISSING if the value is MISSING; NULL if the value is NULL; the object itself if the value is an object; an empty object for all other values.
• TOSTRING(expr): Returns MISSING if the value is MISSING; NULL if the value is NULL; \"false\" if the value is FALSE; \"true\" if the value is TRUE; the number as a string if the value is a number; the string itself if the value is a string; NULL for all other values."},{"location":"n1ql-query-strings/#querybuilder-differences","title":"QueryBuilder Differences","text":"

          Couchbase Lite SQL++ Query supports all QueryBuilder features, except Predictive Query and Index. See Table 21 for the features supported by SQL++ but not by QueryBuilder.

          Table 21. QueryBuilder Differences

• Conditional Operator: CASE (WHEN \u2026 THEN \u2026 ELSE \u2026)
• Array Functions: ARRAY_AGG, ARRAY_AVG, ARRAY_COUNT, ARRAY_IFNULL, ARRAY_MAX, ARRAY_MIN, ARRAY_SUM
• Conditional Functions: IFMISSING, IFMISSINGORNULL, IFNULL, MISSINGIF, NULLIF
• Math Functions: DIV, IDIV, ROUND_EVEN
• Pattern Matching Functions: REGEXP_CONTAINS, REGEXP_LIKE, REGEXP_POSITION, REGEXP_REPLACE
• Type Checking Functions: ISARRAY, ISATOM, ISBOOLEAN, ISNUMBER, ISOBJECT, ISSTRING, TYPE
• Type Conversion Functions: TOARRAY, TOATOM, TOBOOLEAN, TONUMBER, TOOBJECT, TOSTRING"},{"location":"n1ql-query-strings/#query-parameters","title":"Query Parameters","text":"

          You can provide runtime parameters to your SQL++ query to make it more flexible.

To specify substitutable parameters within your query string, prefix the parameter name with $ (for example, $type) \u2014 see Example 18.

          Example 18. Running a SQL++ Query

          val query = database.createQuery(\n    \"SELECT META().id AS id FROM _ WHERE type = \\$type\"\n) \n\nquery.parameters = Parameters().setString(\"type\", \"hotel\") \n\nreturn query.execute().allResults()\n
          1. Define a parameter placeholder $type
          2. Set the value of the $type parameter
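More than one parameter can be bound on the same Parameters object before execution. A sketch assuming typed setters such as setInt exist alongside the setString call shown above, and that the setters are chainable; the minRooms parameter is hypothetical:

```kotlin
// Sketch: binding multiple runtime parameters.
// setInt is assumed to mirror the setString pattern shown above.
val query = database.createQuery(
    "SELECT META().id FROM _ WHERE type = \$type AND rooms >= \$minRooms"
)
query.parameters = Parameters()
    .setString("type", "hotel")
    .setInt("minRooms", 2)
```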
          "},{"location":"n1ql-server-differences/","title":"SQL++ Server Differences","text":"

          Differences between Couchbase Server SQL++ and Couchbase Lite SQL++

          Important

          N1QL is Couchbase\u2019s implementation of the developing SQL++ standard. As such the terms N1QL and SQL++ are used interchangeably in Couchbase documentation unless explicitly stated otherwise.

          There are several minor but notable behavior differences between SQL++ for Mobile queries and SQL++ for Server, as shown in Table 1.

          In some instances, if required, you can force SQL++ for Mobile to work in the same way as SQL++ for Server. This table compares Couchbase Server and Mobile instances:

          Table 1. SQL++ Query Comparison

• Scopes and Collections
  Server: SELECT * FROM travel-sample.inventory.airport
  Mobile: SELECT * FROM inventory.airport
• USE KEYS
  Server: SELECT fname, email FROM tutorial USE KEYS [\"dave\", \"ian\"];
  Mobile: SELECT fname, email FROM tutorial WHERE meta().id IN (\"dave\", \"ian\");
• ON KEYS
  Server: SELECT * FROM `user` u JOIN orders o ON KEYS ARRAY s.order_id FOR s IN u.order_history END;
  Mobile: SELECT * FROM user u, u.order_history s JOIN orders o ON s.order_id = meta(o).id;
• ON KEY
  Server: SELECT * FROM `user` u JOIN orders o ON KEY o.user_id FOR u;
  Mobile: SELECT * FROM user u JOIN orders o ON meta(u).id = o.user_id;
• NEST
  Server: SELECT * FROM `user` u NEST orders orders ON KEYS ARRAY s.order_id FOR s IN u.order_history END;
  Mobile: NEST/UNNEST not supported
• LEFT OUTER NEST
  Server: SELECT * FROM user u LEFT OUTER NEST orders orders ON KEYS ARRAY s.order_id FOR s IN u.order_history END;
  Mobile: NEST/UNNEST not supported
• ARRAY
  Server: ARRAY i FOR i IN [1, 2] END
  Mobile: (SELECT VALUE i FROM [1, 2] AS i)
• ARRAY FIRST
  Server: FIRST v FOR v IN arr
  Mobile: arr[0]
• LIMIT l OFFSET o
  Server: does not allow OFFSET without LIMIT
  Mobile: allows OFFSET without LIMIT
• UNION, INTERSECT, and EXCEPT
  Server: all three are supported (with ALL and DISTINCT variants)
  Mobile: not supported
• OUTER JOIN
  Server: both LEFT and RIGHT OUTER JOIN supported
  Mobile: only LEFT OUTER JOIN supported (and necessary for query expressability)
• <, <=, =, etc. operators
  Server: can compare either complex values or scalar values
  Mobile: only scalar values may be compared
• ORDER BY
  Server: result sequencing is based on specific rules described in the SQL++ (server) OrderBy clause
  Mobile: result sequencing is based on the SQLite ordering described in the SQLite select overview; the ordering of Dictionary and Array objects is based on binary ordering
• SELECT DISTINCT
  Server: supported
  Mobile: SELECT DISTINCT VALUE is supported when the returned values are scalars
• CREATE INDEX
  Server: supported
  Mobile: not supported
• INSERT/UPSERT/DELETE
  Server: supported
  Mobile: not supported"},{"location":"n1ql-server-differences/#boolean-logic-rules","title":"Boolean Logic Rules","text":"SQL++ for Couchbase Server: Couchbase Server operates in the same way as Couchbase Lite, except:
          • MISSING, NULL and FALSE are FALSE
          • Numbers 0 is FALSE
          • Empty strings, arrays, and objects are FALSE
          • All other values are TRUE
You can choose to use Couchbase Server's SQL++ rules by using the TOBOOLEAN(expr) function to convert a value to its boolean equivalent.

SQL++ for Mobile: the boolean logic rules are based on SQLite's, so:
          • TRUE is TRUE, and FALSE is FALSE
          • Numbers 0 or 0.0 are FALSE
          • Arrays and dictionaries are FALSE
• String and Blob values are TRUE if they cast to a non-zero number, and FALSE if they cast to 0 or 0.0 (see SQLite's CAST and boolean expressions documentation for more details)
          • NULL is FALSE
          • MISSING is MISSING
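To opt in to Server-style boolean coercion for a single expression, wrap it in the TOBOOLEAN() function mentioned above. An illustrative sketch (the `database` variable and the isActive property are hypothetical):

```kotlin
// Illustrative sketch: TOBOOLEAN() applies Couchbase Server's boolean rules
// instead of the SQLite-based rules listed above.
val query = database.createQuery(
    "SELECT name FROM _ WHERE TOBOOLEAN(isActive)"
)
```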
          "},{"location":"n1ql-server-differences/#logical-operations","title":"Logical Operations","text":"

In SQL++ for Mobile, logical operations return one of three possible values: TRUE, FALSE, or MISSING.

          Logical operations with the MISSING value could result in TRUE or FALSE if the result can be determined regardless of the missing value, otherwise the result will be MISSING.

          In SQL++ for Mobile \u2014 unlike SQL++ for Server \u2014 NULL is implicitly converted to FALSE before evaluating logical operations. Table 2 summarizes the result of logical operations with different operand values and also shows where the Couchbase Server behavior differs.

          Table 2. Logical Operations Comparison

Operand a = TRUE:
• b = TRUE: a AND b = TRUE; a OR b = TRUE (Server: same)
• b = FALSE: a AND b = FALSE; a OR b = TRUE (Server: same)
• b = NULL: a AND b = FALSE; a OR b = TRUE (Server: a AND b = NULL)
• b = MISSING: a AND b = MISSING; a OR b = TRUE (Server: same)
Operand a = FALSE:
• b = TRUE: a AND b = FALSE; a OR b = TRUE (Server: same)
• b = FALSE: a AND b = FALSE; a OR b = FALSE (Server: same)
• b = NULL: a AND b = FALSE; a OR b = FALSE (Server: a OR b = NULL)
• b = MISSING: a AND b = FALSE; a OR b = MISSING (Server: same)
Operand a = NULL:
• b = TRUE: a AND b = FALSE; a OR b = TRUE (Server: a AND b = NULL)
• b = FALSE: a AND b = FALSE; a OR b = FALSE (Server: a OR b = NULL)
• b = NULL: a AND b = FALSE; a OR b = FALSE (Server: a AND b = NULL; a OR b = NULL)
• b = MISSING: a AND b = FALSE; a OR b = MISSING (Server: a AND b = MISSING; a OR b = NULL)
Operand a = MISSING:
• b = TRUE: a AND b = MISSING; a OR b = TRUE (Server: same)
• b = FALSE: a AND b = FALSE; a OR b = MISSING (Server: same)
• b = NULL: a AND b = FALSE; a OR b = MISSING (Server: a AND b = MISSING; a OR b = NULL)
• b = MISSING: a AND b = MISSING; a OR b = MISSING (Server: same)"},{"location":"n1ql-server-differences/#crud-operations","title":"CRUD Operations","text":"

          SQL++ for Mobile only supports Read or Query operations.

SQL++ for Server fully supports CRUD operations.

          "},{"location":"n1ql-server-differences/#functions","title":"Functions","text":""},{"location":"n1ql-server-differences/#division-operator","title":"Division Operator","text":"SQL++ for Server SQL++ for Mobile SQL++ for Server always performs float division regardless of the types of the operands.You can force this behavior in SQL++ for Mobile by using the DIV(x, y) function. The operand types determine the division operation performed.If both are integers, integer division is used.If one is a floating number, then float division is used."},{"location":"n1ql-server-differences/#round-function","title":"Round Function","text":"SQL++ for Server SQL++ for Mobile SQL++ for Server ROUND() uses the Rounding to Nearest Even convention (for example, ROUND(1.85) returns 1.8).You can force this behavior in Couchbase Lite by using the ROUND_EVEN() function. The ROUND() function returns a value to the given number of integer digits to the right of the decimal point (left if digits is negative).
          • Digits are 0 if not given.
          • Midpoint values are handled using the Rounding Away From Zero convention, which rounds them to the next number away from zero (for example, ROUND(1.85) returns 1.9).
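Both Server-compatible behaviors can be requested explicitly in a Mobile query. An illustrative sketch (the `database` variable and the total, qty, and price properties are hypothetical):

```kotlin
// Illustrative sketch: DIV() forces float division and
// ROUND_EVEN() forces banker's rounding, matching Server semantics.
val query = database.createQuery(
    "SELECT DIV(total, qty) AS average, ROUND_EVEN(price, 2) AS rounded FROM _"
)
```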
          "},{"location":"paging/","title":"Paging","text":"

The paging extensions are built on Cash App's Multiplatform Paging, which brings Google's AndroidX Paging to Kotlin Multiplatform. Kotbase Paging provides a PagingSource which performs limit/offset paging queries based on a user-supplied database query.

          "},{"location":"paging/#installation","title":"Installation","text":"Enterprise EditionCommunity Edition build.gradle.kts
          kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite-ee-paging:3.1.3-1.1.0\")\n        }\n    }\n}\n
          build.gradle.kts
          kotlin {\n    sourceSets {\n        commonMain.dependencies {\n            implementation(\"dev.kotbase:couchbase-lite-paging:3.1.3-1.1.0\")\n        }\n    }\n}\n
          "},{"location":"paging/#usage","title":"Usage","text":"
          // Uses kotlinx-serialization JSON processor\n@Serializable\ndata class Hotel(val id: String, val type: String, val name: String)\n\nval select = select(Meta.id, \"type\", \"name\")\nval mapper = { json: String ->\n    Json.decodeFromString<Hotel>(json)\n}\nval queryProvider: From.() -> LimitRouter = {\n    where {\n        (\"type\" equalTo \"hotel\") and\n        (\"state\" equalTo \"California\")\n    }\n    .orderBy { \"name\".ascending() }\n}\n\nval pagingSource = QueryPagingSource(\n    EmptyCoroutineContext,\n    select,\n    collection,\n    mapper,\n    queryProvider\n)\n
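The resulting PagingSource can then be plugged into a standard Pager to expose a flow of pages. A hedged sketch, assuming the Multiplatform Paging Pager and PagingConfig APIs; the PagingConfig values are illustrative:

```kotlin
// Sketch: consume the QueryPagingSource through a Pager.
// A fresh PagingSource should be created for each invalidation.
val pager = Pager(PagingConfig(pageSize = 20)) {
    QueryPagingSource(
        EmptyCoroutineContext,
        select,
        collection,
        mapper,
        queryProvider
    )
}

// pager.flow emits PagingData<Hotel> to be collected in the UI layer.
val hotels = pager.flow
```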
          "},{"location":"passive-peer/","title":"Passive Peer","text":"

          How to set up a listener to accept a replicator connection and sync using peer-to-peer

          Android enablers

          Allow Unencrypted Network Traffic

To use cleartext (unencrypted) network traffic (http:// and/or ws://), include android:usesCleartextTraffic=\"true\" in the application element of the manifest, as shown on developer.android.com. This is not recommended in production.

          iOS Restrictions

          iOS 14 Applications

          When your application attempts to access the user\u2019s local network, iOS will prompt them to allow (or deny) access. You can customize the message presented to the user by editing the description for the NSLocalNetworkUsageDescription key in the Info.plist.

          Use Background Threads

          As with any network or file I/O activity, Couchbase Lite activities should not be performed on the UI thread. Always use a background thread.

          Code Snippets

          All code examples are indicative only. They demonstrate the basic concepts and approaches to using a feature. Use them as inspiration and adapt these examples to best practice when developing applications for your platform.

          "},{"location":"passive-peer/#introduction","title":"Introduction","text":"

          This is an Enterprise Edition feature.

          This content provides code and configuration examples covering the implementation of Peer-to-Peer Sync over WebSockets. Specifically, it covers the implementation of a Passive Peer.

Couchbase\u2019s Passive Peer (also referred to as the server or listener) will accept a connection from an Active Peer (also referred to as the client or replicator) and replicate database changes to synchronize both databases.

          Subsequent sections provide additional details and examples for the main configuration options.

          Secure Storage

          The use of TLS, its associated keys and certificates requires using secure storage to minimize the chances of a security breach. The implementation of this storage differs from platform to platform \u2014 see Using Secure Storage.

          "},{"location":"passive-peer/#configuration-summary","title":"Configuration Summary","text":"

          You should configure and initialize a listener for each Couchbase Lite database instance you want to sync. There is no limit on the number of listeners you may configure \u2014 Example 1 shows a simple initialization and configuration process.

          Example 1. Listener configuration and initialization

          val listener = URLEndpointListener(\n    URLEndpointListenerConfigurationFactory.newConfig(\n        collections = collections,\n        port = 55990,\n        networkInterface = \"wlan0\",\n\n        enableDeltaSync = false,\n\n        // Configure server security\n        disableTls = false,\n\n        // Use an Anonymous Self-Signed Cert\n        identity = null,\n\n        // Configure Client Security using an Authenticator\n        // For example, Basic Authentication\n        authenticator = ListenerPasswordAuthenticator { usr, pwd ->\n            (usr === validUser) && (pwd.concatToString() == validPass)\n        }\n    )\n)\n\n// Start the listener\nlistener.start()\n
          1. Identify the collections from the local database to be used \u2014 see Initialize the Listener Configuration
          2. Optionally, choose a port to use. By default, the system will automatically assign a port \u2014 to override this, see Set Port and Network Interface
          3. Optionally, choose a network interface to use. By default, the system will listen on all network interfaces \u2014 to override this see Set Port and Network Interface
          4. Optionally, choose to sync only changes. The default is not to enable delta-sync \u2014 see Delta Sync
5. Set server security. TLS is enabled by default, so you can usually omit this line. But you can, optionally, disable TLS (not advisable in production) \u2014 see TLS Security
          6. Set the credentials this server will present to the client for authentication. Here we show the default TLS authentication, which is an anonymous self-signed certificate. The server must always authenticate itself to the client.
          7. Set client security \u2014 define the credentials the server expects the client to present for authentication. Here we show how basic authentication is configured to authenticate the client-supplied credentials from the http authentication header against valid credentials \u2014 see Authenticating the Client for more options. Note that client authentication is optional.
          8. Initialize the listener using the configuration settings.
9. Start the listener
          "},{"location":"passive-peer/#device-discovery","title":"Device Discovery","text":"

This phase is optional: if the listener is initialized on a well-known URL endpoint (for example, a static IP address or well-known DNS address), you can configure Active Peers to connect to it directly.

          Before initiating the listener, you may execute a peer discovery phase. For the Passive Peer, this involves advertising the service using, for example, Network Service Discovery on Android or Bonjour on iOS and waiting for an invite from the Active Peer. The connection is established once the Passive Peer has authenticated and accepted an Active Peer\u2019s invitation.

          "},{"location":"passive-peer/#initialize-the-listener-configuration","title":"Initialize the Listener Configuration","text":"

          Initialize the listener configuration with the collections to sync from the local database \u2014 see Example 2. All other configuration values take their default setting.

          Each listener instance serves one Couchbase Lite database. Couchbase sets no hard limit on the number of listeners you can initialize.

          Example 2. Specify Local Database

          collections = collections,\n

Set the local database using the URLEndpointListenerConfiguration(Database) constructor. The database must be opened before the listener is started.

          "},{"location":"passive-peer/#set-port-and-network-interface","title":"Set Port and Network Interface","text":""},{"location":"passive-peer/#port-number","title":"Port number","text":"

          The Listener will automatically select an available port if you do not specify one \u2014 see Example 3 for how to specify a port.

          Example 3. Specify a port

          port = 55990,\n

          To use a canonical port \u2014 one known to other applications \u2014 specify it explicitly using the port property shown here. Ensure that firewall rules do not block any port you do specify.

          "},{"location":"passive-peer/#network-interface","title":"Network Interface","text":"

          The listener will listen on all network interfaces by default.

          Example 4. Specify a Network Interface to Use

          networkInterface = \"wlan0\",\n

          To specify an interface \u2014 one known to other applications \u2014 identify it explicitly, using the networkInterface property shown here. This must be either an IP address or network interface name such as en0.

          "},{"location":"passive-peer/#delta-sync","title":"Delta Sync","text":"

          Delta Sync allows clients to sync only those parts of a document that have changed. This can result in significant bandwidth consumption savings and throughput improvements. Both are valuable benefits, especially when network bandwidth is constrained.

          Example 5. Enable delta sync

          enableDeltaSync = false,\n

          Delta sync replication is not enabled by default. Use URLEndpointListenerConfiguration's isDeltaSyncEnabled property to activate or deactivate it.

          "},{"location":"passive-peer/#tls-security","title":"TLS Security","text":""},{"location":"passive-peer/#enable-or-disable-tls","title":"Enable or Disable TLS","text":"

          Define whether the connection is to use TLS or clear text.

TLS-based encryption is enabled by default and should be used in any production environment. However, it can be disabled, for example, in development or test environments.

          When TLS is enabled, Couchbase Lite provides several options on how the listener may be configured with an appropriate TLS Identity \u2014 see Configure TLS Identity for Listener.

          Note

On the Android platform, to use cleartext (unencrypted) network traffic (http:// and/or ws://), include android:usesCleartextTraffic=\"true\" in the application element of the manifest, as shown on developer.android.com. This is not recommended in production.

You can use URLEndpointListenerConfiguration's isTlsDisabled property to disable TLS communication if necessary.

          The isTlsDisabled setting must be false when Client Cert Authentication is required.

          Basic Authentication can be used with, or without, TLS.

          isTlsDisabled works in conjunction with TLSIdentity, to enable developers to define the key and certificate to be used.

          • If isTlsDisabled is true \u2014 TLS communication is disabled and the TLS identity is ignored. Active peers will use the ws:// URL scheme to connect to the listener.
          • If isTlsDisabled is false or not specified \u2014 TLS communication is enabled. Active peers will use the wss:// URL scheme to connect to the listener.
          "},{"location":"passive-peer/#configure-tls-identity-for-listener","title":"Configure TLS Identity for Listener","text":"

          Define the credentials the server will present to the client for authentication. Note that the server must always authenticate itself with the client \u2014 see Authenticating the Listener on Active Peer for how the client deals with this.

          Use URLEndpointListenerConfiguration's tlsIdentity property to configure the TLS Identity used in TLS communication.

          If TLSIdentity is not set, then the listener uses an auto-generated anonymous self-signed identity (unless isTlsDisabled = true). Whilst the client cannot use this to authenticate the server, it will use it to encrypt communication, giving a more secure option than non-TLS communication.

          The auto-generated anonymous self-signed identity is saved in secure storage for future use to obviate the need to re-generate it.

          Note

          Typically, you will configure the listener\u2019s TLS Identity once, during the initial launch, and re-use it (from secure storage) on any subsequent starts.

          Here are some example code snippets showing:

          • Importing a TLS identity \u2014 see Example 6
          • Setting TLS identity to expect self-signed certificate \u2014 see Example 7
          • Setting TLS identity to expect anonymous certificate \u2014 see Example 8

          Example 6. Import Listener\u2019s TLS identity

          TLS identity certificate import APIs are platform-specific.

          AndroidiOS/macOSJVM in androidMain
          config.isTlsDisabled = false\n\nKeyStoreUtils.importEntry(\n    \"PKCS12\",\n    context.assets.open(\"cert.p12\"),\n    \"store-password\".toCharArray(),\n    \"store-alias\",\n    \"key-password\".toCharArray(),\n    \"new-alias\"\n)\n\nconfig.tlsIdentity = TLSIdentity.getIdentity(\"new-alias\")\n
          in appleMain
          config.isTlsDisabled = false\n\nval path = NSBundle.mainBundle.pathForResource(\"cert\", ofType = \"p12\") ?: return\n\nval certData = NSData.dataWithContentsOfFile(path) ?: return\n\nval tlsIdentity = TLSIdentity.importIdentity(\n    data = certData.toByteArray(),\n    password = \"123\".toCharArray(),\n    alias = \"alias\"\n)\n\nconfig.tlsIdentity = tlsIdentity\n
          in jvmMain
          config.isTlsDisabled = false\n\nval keyStore = KeyStore.getInstance(\"PKCS12\")\nFiles.newInputStream(Path(\"cert.p12\")).use { keyStream ->\n    keyStore.load(\n        keyStream,\n        \"keystore-password\".toCharArray()\n    )\n}\n\nconfig.tlsIdentity = TLSIdentity.getIdentity(keyStore, \"alias\", \"keyPass\".toCharArray())\n
          1. Ensure TLS is used
          2. Get key and certificate data
          3. Use the retrieved data to create and store the TLS identity
          4. Set this identity as the one presented in response to the client\u2019s prompt

          Example 7. Create Self-Signed Cert

          CommonJVM in commonMain
          config.isTlsDisabled = false\n\nval attrs = mapOf(\n    TLSIdentity.CERT_ATTRIBUTE_COMMON_NAME to \"Couchbase Demo\",\n    TLSIdentity.CERT_ATTRIBUTE_ORGANIZATION to \"Couchbase\",\n    TLSIdentity.CERT_ATTRIBUTE_ORGANIZATION_UNIT to \"Mobile\",\n    TLSIdentity.CERT_ATTRIBUTE_EMAIL_ADDRESS to \"noreply@couchbase.com\"\n)\n\nval tlsIdentity = TLSIdentity.createIdentity(\n    true,\n    attrs,\n    Clock.System.now() + 1.days,\n    \"cert-alias\"\n)\n\nconfig.tlsIdentity = tlsIdentity\n
          in jvmMain
          // On the JVM platform, before calling\n// common TLSIdentity.createIdentity() or getIdentity()\n// load a KeyStore to use\nval keyStore = KeyStore.getInstance(\"PKCS12\")\nkeyStore.load(null, null)\nTLSIdentity.useKeyStore(keyStore)\n
          1. Ensure TLS is used.
          2. Map the required certificate attributes.
          3. Create the required TLS identity using the attributes. Add to secure storage as 'cert-alias'.
          4. Configure the server to present the defined identity credentials when prompted.

          Example 8. Use Anonymous Self-Signed Certificate

          This example uses an anonymous self-signed certificate. Generated certificates are held in secure storage.

          config.isTlsDisabled = false\n\n// Use an Anonymous Self-Signed Cert\nconfig.tlsIdentity = null\n
          1. Ensure TLS is used. This is the default setting.
          2. Authenticate using an anonymous self-signed certificate. This is the default setting.
          "},{"location":"passive-peer/#authenticating-the-client","title":"Authenticating the Client","text":"

          In this section Use Basic Authentication | Using Client Certificate Authentication | Delete Entry | The Impact of TLS Settings

          Define how the server (listener) will authenticate the client as one it is prepared to interact with.

          Whilst client authentication is optional, Couchbase Lite provides the necessary tools to implement it. Use the URLEndpointListenerConfiguration class\u2019s authenticator property to specify how the client-supplied credentials are to be authenticated.

          Valid options are:

          • No authentication \u2014 If you do not define a ListenerAuthenticator then all clients are accepted.
          • Basic Authentication \u2014 uses the ListenerPasswordAuthenticator to authenticate the client using the client-supplied username and password (from the http authentication header).
          • ListenerCertificateAuthenticator \u2014 which authenticates the client using a client-supplied chain of one or more certificates. You should initialize the authenticator using one of the following constructors:
            • A list of one or more root certificates \u2014 the client-supplied certificate chain must end at a certificate in this list if it is to be authenticated
            • A block of code that assumes total responsibility for authentication \u2014 it must return a boolean response (true for an authenticated client, or false for a failed authentication).
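The first, list-of-roots option boils down to a chain-termination check. As a rough sketch, with raw byte arrays standing in for real certificate objects (real code compares parsed X.509 certificates):

```kotlin
// Illustrative rule: the client-supplied chain is accepted only if its
// last certificate matches one of the roots given to the authenticator.
fun chainEndsAtRoot(chain: List<ByteArray>, roots: List<ByteArray>): Boolean {
    val last = chain.lastOrNull() ?: return false
    return roots.any { it.contentEquals(last) }
}
```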
          "},{"location":"passive-peer/#use-basic-authentication","title":"Use Basic Authentication","text":"

          Define how to authenticate client-supplied username and password credentials. To use client-supplied certificates instead \u2014 see Using Client Certificate Authentication

          Example 9. Password authentication

          config.authenticator = ListenerPasswordAuthenticator { username, password ->\n    username == validUser && password.concatToString() == validPassword\n}\n

          Where username/password are the client-supplied values (from the http-authentication header) and validUser/validPassword are the values acceptable to the server.
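The lambda above uses a plain equality check for brevity. When comparing secrets, a constant-time comparison is a common hardening step; this helper is a sketch, not part of the Kotbase API:

```kotlin
// Compares two char arrays without short-circuiting on the first
// mismatch, so the comparison time does not leak how many leading
// characters of the password were correct.
fun constantTimeEquals(a: CharArray, b: CharArray): Boolean {
    if (a.size != b.size) return false
    var diff = 0
    for (i in a.indices) diff = diff or (a[i].code xor b[i].code)
    return diff == 0
}
```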

          "},{"location":"passive-peer/#using-client-certificate-authentication","title":"Using Client Certificate Authentication","text":"

          Define how the server will authenticate client-supplied certificates.

          There are two ways to authenticate a client:

          • A chain of one or more certificates that ends at a certificate in the list of certificates supplied to the constructor for ListenerCertificateAuthenticator \u2014 see Example 10
          • Application logic: This method assumes complete responsibility for verifying and authenticating the client \u2014 see Example 11. If the parameter supplied to the constructor for ListenerCertificateAuthenticator is of type ListenerCertificateAuthenticatorDelegate, all other forms of authentication are bypassed. The client response to the certificate request is passed to the method supplied as the constructor parameter. The logic should take the form of a function or lambda.

          Example 10. Set Certificate Authorization

          Configure the server (listener) to authenticate the client against a list of one or more certificates provided to the ListenerCertificateAuthenticator.

          // Configure the client authenticator\n// to validate using ROOT CA\n// validId.certs is a list containing a client cert to accept\n// and any other certs needed to complete a chain between\n// the client cert and a CA\nval validId = TLSIdentity.getIdentity(\"Our Corporate Id\")\n    ?: throw IllegalStateException(\"Cannot find corporate id\")\n\n// accept only clients signed by the corp cert\nval listener = URLEndpointListener(\n    URLEndpointListenerConfigurationFactory.newConfig(\n        // get the identity \n        collections = collections,\n        identity = validId,\n        authenticator = ListenerCertificateAuthenticator(validId.certs)\n    )\n)\n
          1. Get the identity data to authenticate against. This can be, for example, from a resource file provided with the app, or an identity previously saved in secure storage.
          2. Configure the authenticator to authenticate the client-supplied certificate(s) using these root certs. A valid client will provide one or more certificates that match a certificate in this list.
          3. Add the authenticator to the listener configuration.

          Example 11. Application Logic

          Configure the server (listener) to authenticate the client using user-supplied logic.

          // Configure authentication using application logic\nval corpId = TLSIdentity.getIdentity(\"OurCorp\")\n    ?: throw IllegalStateException(\"Cannot find corporate id\")\n\nconfig.tlsIdentity = corpId\n\nconfig.authenticator = ListenerCertificateAuthenticator { certs ->\n    // supply logic that returns boolean\n    // true for authenticate, false if not\n    // For instance:\n    certs[0].contentEquals(corpId.certs[0])\n}\n
          1. Get the identity data to authenticate against. This can be, for example, from a resource file provided with the app, or an identity previously saved in secure storage.
          2. Configure the authenticator to pass the root certificates to a user-supplied code block. This code assumes complete responsibility for authenticating the client-supplied certificate(s). It must return a boolean value, with true denoting that the client-supplied certificate is authentic.
          3. Add the authenticator to the listener configuration.
          "},{"location":"passive-peer/#delete-entry","title":"Delete Entry","text":"

          You can remove unwanted TLS identities from secure storage using the convenience API.

          Example 12. Deleting TLS Identities

          TLSIdentity.deleteIdentity(\"cert-alias\")\n
          "},{"location":"passive-peer/#the-impact-of-tls-settings","title":"The Impact of TLS Settings","text":"

          The table in this section shows the expected system behavior (with regard to security) depending on the TLS configuration settings deployed.

          Table 1. Expected system behavior

          isTlsDisabled = true; tlsIdentity is ignored:
          • TLS is disabled; all communication is plain text.

          isTlsDisabled = false; tlsIdentity set to null:
          • The system will auto-generate an anonymous self-signed cert.
          • Active Peers (clients) should be configured to accept self-signed certificates.
          • Communication is encrypted.

          isTlsDisabled = false; tlsIdentity set to a server identity generated from a self- or CA-signed certificate:
          • On first use \u2014 Bring your own certificate and private key; for example, using the TLSIdentity class\u2019s createIdentity() method to add it to the secure storage.
          • Each time \u2014 Use the server identity from the certificate stored in the secure storage; for example, using the TLSIdentity class\u2019s getIdentity() method with the alias you want to retrieve.
          • The system will use the configured identity.
          • Active Peers will validate the server certificate corresponding to the TLSIdentity (as long as they are configured to not skip validation \u2014 see TLS Security).
          "},{"location":"passive-peer/#start-listener","title":"Start Listener","text":"

          Once you have completed the listener\u2019s configuration settings you can initialize the listener instance and start it running \u2014 see Example 13.

          Example 13. Initialize and start listener

          // Initialize the listener\nval listener = URLEndpointListener(\n    URLEndpointListenerConfigurationFactory.newConfig(\n        collections = collections,\n        port = 55990,\n        networkInterface = \"wlan0\",\n\n        enableDeltaSync = false,\n\n        // Configure server security\n        disableTls = false,\n\n        // Use an Anonymous Self-Signed Cert\n        identity = null,\n\n        // Configure Client Security using an Authenticator\n        // For example, Basic Authentication\n        authenticator = ListenerPasswordAuthenticator { usr, pwd ->\n            (usr === validUser) && (pwd.concatToString() == validPass)\n        }\n    )\n)\n\n// Start the listener\nlistener.start()\n
          "},{"location":"passive-peer/#monitor-listener","title":"Monitor Listener","text":"

          Use the listener\u2019s status property to get counts of total and active connections \u2014 see Example 14.

          Note that these counts can be extremely volatile, so the actual number of active connections may have changed by the time the ConnectionStatus class returns a result.

          Example 14. Get connection counts

          val connectionCount = listener.status?.connectionCount\nval activeConnectionCount = listener.status?.activeConnectionCount\n
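Because the counts are volatile, one defensive pattern is to poll them and act only after several consecutive idle readings; the three-poll threshold below is an arbitrary illustrative choice:

```kotlin
// Treats the volatile activeConnectionCount as a hint: reports "idle"
// only when the last few polled values were all zero. The threshold
// of 3 polls is illustrative, not a Couchbase Lite recommendation.
fun looksIdle(recentActiveCounts: List<Int>, requiredZeroPolls: Int = 3): Boolean =
    recentActiveCounts.size >= requiredZeroPolls &&
        recentActiveCounts.takeLast(requiredZeroPolls).all { it == 0 }
```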
          "},{"location":"passive-peer/#stop-listener","title":"Stop Listener","text":"

          It is best practice to check the status of the listener\u2019s connections and stop only when you have confirmed that there are no active connections \u2014 see Example 15.

          Example 15. Stop listener using stop method

          listener.stop()\n

          Note

          Closing the database will also close the listener.

          "},{"location":"peer-to-peer-sync/","title":"Peer-to-Peer Sync","text":"

          Couchbase Lite\u2019s Peer-to-Peer Synchronization enables edge devices to synchronize securely without consuming centralized cloud-server resources.

          "},{"location":"peer-to-peer-sync/#introduction","title":"Introduction","text":"

          This is an Enterprise Edition feature.

          Couchbase Lite\u2019s Peer-to-Peer synchronization solution offers secure storage and bidirectional data synchronization between edge devices without needing a centralized cloud-based control point.

          Couchbase Lite\u2019s Peer-to-Peer data synchronization provides:

          • Instant WebSocket-based listener for use in Peer-to-Peer applications communicating over IP-based networks
          • Simple application development, enabling sync with just a few lines of code
          • Optimized network bandwidth usage and reduced data transfer costs with Delta Sync support
          • Securely sync data with built-in support for Transport Layer Security (TLS) encryption and authentication support
          • Document management, reducing conflicts in concurrent writes with built-in conflict management support
          • Built-in network resiliency
          "},{"location":"peer-to-peer-sync/#overview","title":"Overview","text":"

          Peer-to-Peer synchronization requires one Peer to act as the Listener to the other Peer\u2019s replicator.

          Therefore, to use Peer-to-Peer synchronization in your application, you must configure one Peer to act as a Listener using the Couchbase Listener API, the most important classes of which are URLEndpointListener and URLEndpointListenerConfiguration.

          Example 1. Simple workflow

          1. Configure the listener (passive peer, or server)
          2. Initialize the listener, which listens for incoming WebSocket connections (on a user-defined, or auto-selected, port)
          3. Configure a replicator (active peer, or client)
          4. Use some form of discovery phase, perhaps with a zero-config protocol such as Network Service Discovery for Android or Bonjour for iOS, or use known URL endpoints, to identify a listener
          5. Point the replicator at the listener
          6. Initialize the replicator
          7. Replicator and listener engage in the configured security protocol exchanges to confirm connection
          8. If connection is confirmed then replication will commence, synchronizing the two data stores
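Pointing the replicator at the listener amounts to constructing a WebSocket URL from whatever the discovery phase found; a sketch, with placeholder host, port, and database name:

```kotlin
// Builds the endpoint URL an active peer targets: wss:// when the
// listener uses TLS, ws:// otherwise. Host/port/db are placeholders.
fun endpointUrl(host: String, port: Int, dbName: String, tls: Boolean): String =
    "${if (tls) "wss" else "ws"}://$host:$port/$dbName"

val url = endpointUrl("10.0.2.2", 4984, "db", tls = true) // "wss://10.0.2.2:4984/db"
```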

          Here you can see configuration involves a Passive Peer and an Active Peer and a user-friendly listener configuration in Basic Setup.

          You can also learn how to implement Peer-to-Peer synchronization by referring to our tutorial \u2014 see Getting Started with Peer-to-Peer Synchronization.

          "},{"location":"peer-to-peer-sync/#features","title":"Features","text":"

          Couchbase Lite\u2019s Peer-to-Peer synchronization solution provides support for cross-platform synchronization, for example, between Android and iOS devices.

          Each listener instance serves one Couchbase Lite database. However, there is no hard limit on the number of listener instances you can associate with a database.

          Having a listener on a database still allows you to open replications to other clients. For example, a listener can actively begin replicating to other listeners while listening for connections. These replications can be for the same or a different database.

          The listener will listen on a user-specified port or, if none is specified, automatically select an available port. It will also listen on all available networks, unless you specify a specific network.

          "},{"location":"peer-to-peer-sync/#security","title":"Security","text":"

          Couchbase Lite\u2019s Peer-to-Peer synchronization supports encryption and authentication over TLS with multiple modes, including:

          • No encryption (clear text)
          • CA cert
          • Self-signed cert
          • Anonymous self-signed \u2014 an anonymous TLS identity is auto-generated if no identity is specified. This TLS identity provides encryption but not authentication. Any self-signed certificates generated by the convenience API are stored in secure storage.

          The replicator (client) can handle certificates pinned by the listener for authentication purposes.

          Support is also provided for basic authentication using username and password credentials. Whilst this can be used in clear text mode, developers are strongly advised to use TLS encryption.

          For testing and development purposes, support is provided for the client (active, replicator) to skip verification of self-signed certificates; this mode should not be used in production.

          "},{"location":"peer-to-peer-sync/#error-handling","title":"Error Handling","text":"

          When a listener is stopped, all connected replicators are notified by a WebSocket error. Your application should distinguish between transient and permanent connectivity errors.

          "},{"location":"peer-to-peer-sync/#passive-peers","title":"Passive peers","text":"

          A Passive Peer losing connectivity with an Active Peer will clean up any associated endpoint connections to that peer. The Active Peer may attempt to reconnect to the Passive Peer.

          "},{"location":"peer-to-peer-sync/#active-peers","title":"Active peers","text":"

          An Active Peer permanently losing connectivity with a Passive Peer will cease replicating.

          An Active Peer temporarily losing connectivity with a Passive Peer will use exponential backoff functionality to attempt reconnection.
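Couchbase Lite's actual backoff schedule is internal, but the exponential-backoff idea itself is compact; the 2-second base and 5-minute cap below are assumed values for illustration:

```kotlin
// Exponential backoff: the retry delay doubles with each failed
// attempt until it reaches a cap. Base and cap values here are
// illustrative, not the replicator's real internal parameters.
fun backoffSeconds(attempt: Int, baseSeconds: Long = 2, capSeconds: Long = 300): Long =
    minOf(capSeconds, baseSeconds shl minOf(attempt, 30))

// attempts 0, 1, 2, 3, ... wait 2, 4, 8, 16, ... seconds, capped at 300
```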

          "},{"location":"peer-to-peer-sync/#delta-sync","title":"Delta Sync","text":"

          Optional delta-sync support is provided but is inactive by default.

          Delta-sync can be enabled on a per-replication basis provided that the databases involved are also configured to permit it. Statistics on delta-sync usage are available, including the total number of revisions sent as deltas.

          "},{"location":"peer-to-peer-sync/#conflict-resolution","title":"Conflict Resolution","text":"

          Conflict resolution for Peer-to-Peer synchronization works in the same way as it does for Sync Gateway replication, with both custom and automatic resolution available.

          "},{"location":"peer-to-peer-sync/#basic-setup","title":"Basic Setup","text":"

          You can configure Peer-to-Peer synchronization with just a few lines of code, as shown here in Example 2 and Example 3.

          Example 2. Simple Listener

          This simple listener configuration will give you a listener ready to participate in an encrypted synchronization with a replicator providing a valid username and password.

          val listener = URLEndpointListener(\n    URLEndpointListenerConfigurationFactory.newConfig(\n        collections = db.collections,\n        authenticator = ListenerPasswordAuthenticator { user, pwd ->\n            (user == \"daniel\") && (pwd.concatToString() == \"123\")\n        }\n    )\n)\nlistener.start()\nthis.listener = listener\n
          1. Initialize the listener configuration
          2. Configure the client authenticator to require basic authentication
          3. Initialize the listener
          4. Start the listener

          Example 3. Simple Replicator

          This simple replicator configuration will give you an encrypted, bi-directional Peer-to-Peer synchronization with automatic conflict resolution.

          val listenerEndpoint = URLEndpoint(\"wss://10.0.2.2:4984/db\") \nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        collections = mapOf(collections to null),\n        target = listenerEndpoint,\n        authenticator = BasicAuthenticator(\"valid.user\", \"valid.password.string\".toCharArray()),\n        acceptOnlySelfSignedServerCertificate = true\n    )\n)\nrepl.start() \nthis.replicator = repl\n
          1. Get the listener\u2019s endpoint. Here we use a known URL, but it could be a URL established dynamically in a discovery phase.
          2. Initialize the replicator configuration with the collections of the database to be synchronized and the listener it is to synchronize with.
          3. Configure the replicator to expect a self-signed certificate from the listener.
          4. Configure the replicator to present basic authentication credentials if the listener prompts for them (client authentication is optional).
          5. Initialize the replicator.
          6. Start the replicator.
          "},{"location":"peer-to-peer-sync/#api-highlights","title":"API Highlights","text":""},{"location":"peer-to-peer-sync/#urlendpointlistener","title":"URLEndpointListener","text":"

          The URLEndpointListener is the listener for peer-to-peer synchronization. It acts like a passive replicator, in the same way that Sync Gateway does in a 'standard' replication. On the client side, the listener\u2019s endpoint is used to point the replicator to the listener.

          Core functionalities of the listener are:

          • Users can initialize the class using a URLEndpointListenerConfiguration object.
          • The listener can be started, or can be stopped.
          • Once the listener is started, the total number of connections and the number of active connections can be checked.
          "},{"location":"peer-to-peer-sync/#urlendpointlistenerconfiguration","title":"URLEndpointListenerConfiguration","text":"

          Use URLEndpointListenerConfiguration to create a configuration object you can then use to initialize the listener.

          port

          This is the port that the listener will listen on.

          If the port is zero, the listener will auto-assign an available port to listen on.

          The default value is zero. When the listener is not started, the port is zero.
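The port-zero convention is the same one plain TCP sockets use, so it can be seen in isolation with the JVM's ServerSocket (this sketch does not involve the listener itself):

```kotlin
import java.net.ServerSocket

// Binding to port 0 asks the OS for any free port; the chosen port can
// then be read back, just as the listener reports its auto-assigned
// port once it has started.
val socket = ServerSocket(0)
val assignedPort = socket.localPort // an OS-assigned free port, always > 0
socket.close()
```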

          networkInterface

          Use this to select a specific Network Interface to use, in the form of the IP Address or network interface name.

          If the network interface is specified, only that interface will be used.

          If the network interface is not specified, all available network interfaces will be used.

          The value is null if the listener is not started.

          isTlsDisabled

          You can use URLEndpointListenerConfiguration's isTlsDisabled property to disable TLS communication if necessary.

          The isTlsDisabled setting must be false when Client Cert Authentication is required.

          Basic Authentication can be used with, or without, TLS.

          isTlsDisabled works in conjunction with TLSIdentity, to enable developers to define the key and certificate to be used.

          • If isTlsDisabled is true \u2014 TLS communication is disabled and tlsIdentity is ignored. Active peers will use the ws:// URL scheme used to connect to the listener.
          • If isTlsDisabled is false or not specified \u2014 TLS communication is enabled. Active peers will use the wss:// URL scheme to connect to the listener.

          tlsIdentity

          Use URLEndpointListenerConfiguration's tlsIdentity property to configure the TLS Identity used in TLS communication.

          If TLSIdentity is not set, then the listener uses an auto-generated anonymous self-signed identity (unless isTlsDisabled = true). Whilst the client cannot use this to authenticate the server, it will use it to encrypt communication, giving a more secure option than non-TLS communication.

          The auto-generated anonymous self-signed identity is saved in secure storage for future use to obviate the need to re-generate it.

          When the listener is not started, the identity is null. When TLS is disabled, the identity is always null.

          authenticator

          Use this to specify the authenticator the listener uses to authenticate the client\u2019s connection request. This should be set to one of the following:

          • ListenerPasswordAuthenticator
          • ListenerCertificateAuthenticator
          • null \u2014 there is no authentication

          isReadOnly

          Use this to allow only pull replication. The default value is false.

          isDeltaSyncEnabled

          The option to enable Delta Sync and replicate only changed data also depends on the delta sync settings at the database level. The default value is false.

          "},{"location":"peer-to-peer-sync/#security_1","title":"Security","text":""},{"location":"peer-to-peer-sync/#authentication","title":"Authentication","text":"

          Peer-to-Peer sync supports Basic Authentication and TLS Authentication. For anything other than test deployments, we strongly encourage the use of TLS. In fact, Peer-to-Peer sync using URLEndpointListener is encrypted using TLS by default.

          The authentication mechanism is defined at the endpoint level, meaning that it is independent of the database being replicated. For example, you may use basic authentication on one instance and TLS authentication on another when replicating multiple database instances.

          Note

          The minimum supported version of TLS is TLS 1.2.

          Peer-to-Peer synchronization using URLEndpointListener supports certificate-based authentication of the server and/or listener:

          • Replicator certificates can be: self-signed, from trusted CA, or anonymous (system generated).
          • Listener certificates may be: self-signed or trusted CA signed. Where a TLS certificate is not explicitly specified for the listener, the listener implementation will generate an anonymous certificate to use for encryption.
          • The URLEndpointListener supports the ability to opt out of TLS-encrypted communication. Active clients replicating with a URLEndpointListener have the option to skip validation of server certificates when the listener is configured with self-signed certificates. This option is ignored when dealing with CA certificates.
          "},{"location":"peer-to-peer-sync/#using-secure-storage","title":"Using Secure Storage","text":"

          TLS and its associated keys and certificates might require using secure storage to minimize the chances of a security breach. The implementation of this storage differs from platform to platform. Table 1 summarizes the secure storage used to store keys and certificates for each platform.

          Table 1. Secure storage details

          Android \u2014 Android System KeyStore
          • Android KeyStore was introduced in Android API 18.
          • Android KeyStore security has evolved over time to provide more secure support. Please check this document for more info.

          MacOS/iOS \u2014 KeyChain
          • Use kSecAttrLabel of the SecCertificate to store the TLSIdentity\u2019s label.

          Java \u2014 User Specified KeyStore
          • The KeyStore represents a storage facility for cryptographic keys and certificates. It is the user\u2019s choice whether or not to persist the KeyStore.
          • The supported KeyStore types are PKCS12 (Default from Java 9) and JKS (Default on Java 8 and below).
          "},{"location":"platforms/","title":"Supported Platforms","text":"

          Kotbase provides a common Kotlin Multiplatform API for Couchbase Lite, allowing you to develop a single Kotlin shared library, which compiles to native binaries that can be consumed by native apps on each of the supported platforms: Android, JVM, iOS, macOS, Linux, and Windows.

          "},{"location":"platforms/#android-jvm","title":"Android + JVM","text":"

          Kotbase implements support for JVM desktop and Android apps via the Couchbase Lite Java and Android SDKs. Kotbase's API mirrors the Java SDK as much as feasible, which allows for smooth migration for existing Kotlin code currently utilizing either the Java or Android KTX SDKs. See Differences from Couchbase Lite Java SDK for details about where the APIs differ.

          Kotbase will pull in the correct Couchbase Lite Java dependencies via Gradle.

          "},{"location":"platforms/#minification","title":"Minification","text":"

          An application that enables ProGuard minification must ensure that certain pieces of Couchbase Lite library code are not changed.

          Near-minimal rule set that retains the needed code proguard-rules.pro
          -keep class com.couchbase.lite.ConnectionStatus { <init>(...); }\n-keep class com.couchbase.lite.LiteCoreException { static <methods>; }\n-keep class com.couchbase.lite.internal.replicator.CBLTrustManager {\n    public java.util.List checkServerTrusted(java.security.cert.X509Certificate[], java.lang.String, java.lang.String);\n}\n-keep class com.couchbase.lite.internal.ReplicationCollection {\n    static <methods>;\n    <fields>;\n}\n-keep class com.couchbase.lite.internal.core.C4* {\n    static <methods>;\n    <fields>;\n    <init>(...);\n}\n
          "},{"location":"platforms/#android","title":"Android","text":"API x86 x64 ARM32 ARM64 22+"},{"location":"platforms/#jvm","title":"JVM","text":"JDK Linux x64 macOS x64 Windows x64 8+"},{"location":"platforms/#jvm-on-linux","title":"JVM on Linux","text":"

          Targeting JVM running on Linux requires a specific version of the libicu dependency. (You will see an error such as libLiteCore.so: libicuuc.so.71: cannot open shared object file: No such file or directory indicating the expected version.) If the required version isn't available from your distribution's package manager, you can download it from GitHub.

          "},{"location":"platforms/#ios-macos","title":"iOS + macOS","text":"

          Kotbase supports native iOS and macOS apps via the Couchbase Lite Objective-C SDK. Developers with experience using Couchbase Lite in Swift should find Kotbase's API in Kotlin familiar.

          Binaries need to link with the correct version of the CouchbaseLite XCFramework, which can be downloaded here or added via Carthage or CocoaPods. The version should match the major and minor version of Kotbase, e.g. CouchbaseLite 3.1.x for Kotbase 3.1.3-1.1.0.

          The Kotlin CocoaPods Gradle plugin can also be used to generate a Podspec for your project that includes the CouchbaseLite dependency. Use linkOnly = true to link the dependency without generating Kotlin Objective-C interop:

          CocoaPods plugin Enterprise EditionCommunity Edition build.gradle.kts
          plugins {\n    kotlin(\"multiplatform\")\n    kotlin(\"native.cocoapods\")\n}\n\nkotlin {\n    cocoapods {\n        ...\n        pod(\"CouchbaseLite-Enterprise\", version = \"3.1.4\", linkOnly = true)\n    }\n}\n
          build.gradle.kts
          plugins {\n    kotlin(\"multiplatform\")\n    kotlin(\"native.cocoapods\")\n}\n\nkotlin {\n    cocoapods {\n        ...\n        pod(\"CouchbaseLite\", version = \"3.1.4\", linkOnly = true)\n    }\n}\n
          "},{"location":"platforms/#ios","title":"iOS","text":"Version x64 ARM64 10+"},{"location":"platforms/#macos","title":"macOS","text":"Version x64 ARM64 10.14+"},{"location":"platforms/#linux-windows","title":"Linux + Windows","text":"

Experimental support for Linux and Windows is provided via the Couchbase Lite C SDK. Core functionality should be mostly stable; however, these platforms have not been tested in production. Some tests exhibit slightly different behavior in a few edge cases, and others fail and need further debugging. See comments in tests marked @IgnoreLinuxMingw for details.

          There are a few Enterprise Edition features that are not implemented in the Couchbase Lite C SDK. Kotbase will throw an UnsupportedOperationException if these APIs are called from these platforms.
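In shared code that may run on these platforms, a defensive pattern might look like the following (doEnterpriseOnlyThing() is a purely hypothetical stand-in for any affected Enterprise API):

```kotlin
// Sketch: guard an Enterprise-only API when the app may run on Linux/Windows.
// doEnterpriseOnlyThing() is a hypothetical stand-in for any unimplemented API.
fun tryEnterpriseFeature() {
    try {
        doEnterpriseOnlyThing()
    } catch (e: UnsupportedOperationException) {
        // Not implemented by the Couchbase Lite C SDK on this platform;
        // fall back or disable the feature gracefully.
        println("Feature unavailable on this platform: ${e.message}")
    }
}
```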

          Binaries need to link with the correct version of the native platform libcblite binary, which can be downloaded here or here. The version should match the major and minor version of Kotbase, e.g. libcblite 3.1.x for Kotbase 3.1.3-1.1.0.

          "},{"location":"platforms/#linux","title":"Linux","text":"

          Linux also requires libz, libicu, and libpthread, which may or may not be installed on your system.

          Targeting Linux requires a specific version of the libicu dependency. (You will see an error such as libLiteCore.so: libicuuc.so.71: cannot open shared object file: No such file or directory indicating the expected version.) If the required version isn't available from your distribution's package manager, you can download it from GitHub.
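To discover which libicu version LiteCore expects and which versions your system provides, standard tools can help (the library path shown is illustrative):

```shell
# Show which ICU libraries LiteCore links against (path is illustrative)
ldd /path/to/libLiteCore.so | grep icu

# List the ICU versions currently installed on this system
ldconfig -p | grep libicuuc
```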

          Distro Version x64 ARM64 Debian 9+ Raspberry Pi OS 10+ Ubuntu 20.04+"},{"location":"platforms/#windows","title":"Windows","text":"Version x64 10+"},{"location":"prebuilt-database/","title":"Pre-built Database","text":"

          How to include a snapshot of a pre-built database in your Couchbase Lite app package to shorten initial sync time and reduce bandwidth use

          "},{"location":"prebuilt-database/#overview","title":"Overview","text":"

          Couchbase Lite supports pre-built databases. You can pre-load your app with data instead of syncing it from Sync Gateway during startup to minimize consumer wait time (arising from data setup) on initial install and launch of the application.

          Avoiding an initial bulk sync reduces startup time and network transfer costs.

          It is typically more efficient to download bulk data using the http/ftp stream employed during the application installation than to install a smaller application bundle and then use a replicator to pull in the bulk data.

          Pre-loaded data is typically public/shared, non-user-specific data that is static. Even if the data is not static, you can still benefit from preloading it and only syncing the changed documents on startup.

          The initial sync of any pre-built database pulls in any content changes on the server that occurred after its incorporation into the app, updating the database.

          To use a prebuilt database:

          1. Create a new Couchbase Lite database with the required dataset \u2014 see Creating Pre-built Database
          2. Incorporate the pre-built database with your app bundle as an asset/resource \u2014 see Bundle a Database with an Application
          3. Adjust the start-up logic of your app to check for the presence of the required database. If the database doesn\u2019t already exist, create one using the bundled pre-built database. Initiate a sync to update the data \u2014 see Using Pre-built Database on App Launch
          "},{"location":"prebuilt-database/#creating-pre-built-database","title":"Creating Pre-built Database","text":"

          These steps should form part of your build and release process:

          1. Create a fresh Couchbase Lite database (every time)

            Important

            Always start with a fresh database for each app version; this ensures there are no checkpoint issues

  Otherwise: Reusing the same database in your build process for subsequent app versions will invalidate the cached checkpoint in the packaged database.

          2. Pull the data from Sync Gateway into the new Couchbase Lite database

            Important

  Ensure the replication used to populate the Couchbase Lite database uses the exact same remote URL and replication configuration parameters (channels and filters) as those your app will use when it is running.

  Otherwise: There will be a checkpoint mismatch, and the app will attempt to pull the data down again.

            Don\u2019t, for instance, create a pre-built database against a staging Sync Gateway server and use it within a production app that syncs against a production Sync Gateway.

            You can use the cblite tool (cblite cp) for this \u2014 see cblite cp (export, import, push, pull) | cblite on GitHub

            Alternatively \u2026

            • You can write a simple CBL app to just initiate the required pull sync \u2014 see Remote Sync Gateway
            • A third party community Java app is available. It provides a UI to create a local Couchbase Lite database and pull data from a Sync Gateway database \u2014 see CouchbaseLite Tester.
          3. Create the same indexes the app will use (wait for the replication to finish before doing this).
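Index creation at this stage might look like the following sketch (the index name and indexed property are illustrative, and it assumes an already-open Database in `database`):

```kotlin
// Sketch: create the indexes the app will use, after replication has finished.
// "idx_type" and the indexed property "type" are illustrative names.
val collection = database.defaultCollection
collection.createIndex("idx_type", ValueIndexConfiguration("type"))
```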

          "},{"location":"prebuilt-database/#bundle-a-database-with-an-application","title":"Bundle a Database with an Application","text":"

          Copy the database into your app package.

          Put it in an appropriate place (for example, an assets or resource folder).

          Where the platform permits you can zip the database.

          Alternatively \u2026 rather than bundling the database within the app, the app could pull the database down from a CDN server on launch.

          "},{"location":"prebuilt-database/#database-encryption","title":"Database Encryption","text":"

          This is an Enterprise Edition feature.

          If you are using an encrypted database, Database.copy() does not change the encryption key. The encryption key specified in the config when opening the database is the encryption key used for both the original database and copied database.

          If you copied an un-encrypted database and want to apply encryption to the copy, or if you want to change (or remove) the encryption key applied to the copy:

          1. Provide the original encryption-key (if any) in the database copy\u2019s configuration using DatabaseConfiguration.setEncryptionKey().
          2. Open the database copy.
          3. Use Database.changeEncryptionKey() on the database copy to set the required encryption key. NOTE: To remove encryption on the copy, provide a null encryption-key.
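Put together, the re-keying steps above might be sketched as follows (Enterprise Edition only; database name and passwords are illustrative):

```kotlin
// 1. Open the copy with its original encryption key (if it had one)
val config = DatabaseConfiguration()
config.setEncryptionKey(EncryptionKey("original-password")) // omit if the copy was unencrypted

// 2. Open the database copy
val db = Database("travel-sample-copy", config)

// 3. Set the required key on the copy; pass null instead to remove encryption
db.changeEncryptionKey(EncryptionKey("new-password"))
```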
          "},{"location":"prebuilt-database/#using-pre-built-database-on-app-launch","title":"Using Pre-built Database on App Launch","text":"

During the application start-up logic, check whether the database exists in the required location. If it does not:

          1. Locate the pre-packaged database (for example, in the assets or other resource folder).
          2. Copy the pre-packaged database to the required location.

            Use the API\u2019s Database.copy() method \u2014 see: Example 1; this ensures that a UUID is generated for each copy.

            Important

            Do not copy the database using any other method

            Otherwise: Each copy of the app will invalidate the other apps' checkpoints because a new UUID was not generated.

          3. Open the database; you can now start querying the data and using it.

          4. Start a pull replication, to sync any changes.

            The replicator uses the pre-built database\u2019s checkpoint as the timestamp to sync from; only documents changed since then are synced.

            Important

            If you used cblite to pull the data without including a port number with the URL and are replicating in a Java or iOS (swift/ObjC) app \u2014 you must include the port number in the URL provided to the replication (port 443 for wss:// or 80 for ws://).

            Otherwise: You will get a checkpoint mismatch. This is caused by a URL discrepancy, which arises because cblite automatically adds the default port number when none is specified, but the Java and iOS (swift/ObjC) replicators DO NOT.

            Note

  Start your normal application logic immediately, unless it is essential to have the absolutely up-to-date data set to begin. That way the user is not kept waiting on a progress indicator; they can begin interacting with your app while any out-of-date data is being updated.

          Example 1. Copy database using API

          Note

          Getting the path to a database and package resources is platform-specific.

          You may need to extract the database from your package resources to a temporary directory and then copy it, using Database.copy().

if (Database.exists(\"travel-sample\")) {\n    return\n}\nval pathToPrebuiltDb = getPrebuiltDbPathFromResources()\nDatabase.copy(\n    pathToPrebuiltDb,\n    \"travel-sample\",\n    DatabaseConfiguration()\n)\n
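The pull replication started after copying the database might be sketched as follows (the endpoint URL, with its explicit port number, and the collection are illustrative):

```kotlin
// Sketch: pull replication to sync changes since the pre-built database's checkpoint.
// The endpoint URL (note the explicit port number) and collection are illustrative.
val replConfig = ReplicatorConfiguration(URLEndpoint("wss://example.com:443/travel-sample"))
replConfig.addCollection(collection, null)
replConfig.type = ReplicatorType.PULL

val replicator = Replicator(replConfig)
replicator.start()
```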
          "},{"location":"query-builder/","title":"QueryBuilder","text":"

          How to use QueryBuilder to build effective queries with Kotbase

          Note

          The examples used here are based on the Travel Sample app and data introduced in the Couchbase Mobile Workshop tutorial.

          "},{"location":"query-builder/#introduction","title":"Introduction","text":"

Kotbase provides two ways to build and run database queries: the QueryBuilder API described in this topic, and SQL++ for Mobile.

          Database queries defined with the QueryBuilder API use the query statement format shown in Example 1. The structure and semantics of the query format are based on Couchbase\u2019s SQL++ query language.

          Example 1. Query Format

SELECT ____\nFROM 'data-source'\nWHERE ____\nJOIN ____\nGROUP BY ____\nORDER BY ____\n

          Query Components

SELECT statement: The document properties that will be returned in the result set
FROM: The data source to query the documents from \u2014 the collection of the database
WHERE statement: The query criteria; the SELECTed properties of documents matching these criteria will be returned in the result set
JOIN statement: The criteria for joining multiple documents
GROUP BY statement: The criteria used to group returned items in the result set
ORDER BY statement: The criteria used to order the items in the result set

          Tip

          We recommend working through the query section of the Couchbase Mobile Workshop tutorial as a good way to build your skills in this area.

          Tip

          The examples in the documentation use the official Couchbase Lite query builder APIs, available in the Kotbase core artifacts. Many queries can take advantage of the concise infix function query builder APIs available in the Kotbase KTX extensions.

          "},{"location":"query-builder/#select-statement","title":"SELECT statement","text":"

          In this section Return All Properties | Return Selected Properties

Related: Result Sets

          Use the SELECT statement to specify which properties you want to return from the queried documents. You can opt to retrieve entire documents, or just the specific properties you need.

          "},{"location":"query-builder/#return-all-properties","title":"Return All Properties","text":"

          Use the SelectResult.all() method to return all the properties of selected documents \u2014 see Example 2.

          Example 2. Using SELECT to Retrieve All Properties

          This query shows how to retrieve all properties from all documents in a collection.

          val queryAll = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(Expression.property(\"type\").equalTo(Expression.string(\"hotel\")))\n

          The Query.execute() statement returns the results in a dictionary, where the key is the database name \u2014 see Example 3.

          Example 3. ResultSet Format from SelectResult.all()

          [\n  {\n    \"travel-sample\": { // The result for the first document matching the query criteria.\n      \"callsign\": \"MILE-AIR\",\n      \"country\": \"United States\",\n      \"iata\": \"Q5\",\n      \"icao\": \"MLA\",\n      \"id\": 10,\n      \"name\": \"40-Mile Air\",\n      \"type\": \"airline\"\n    }\n  },\n  {\n    \"travel-sample\": { // The result for the next document matching the query criteria.\n      \"callsign\": \"ALASKAN-AIR\",\n      \"country\": \"United States\",\n      \"iata\": \"AA\",\n      \"icao\": \"AAA\",\n      \"id\": 10,\n      \"name\": \"Alaskan Airways\",\n      \"type\": \"airline\"\n    }\n  }\n]\n

          See Result Sets for more on processing query results.

          "},{"location":"query-builder/#return-selected-properties","title":"Return Selected Properties","text":"

          To access only specific properties, specify a comma-separated list of SelectResult expressions, one for each property, in the select statement of your query \u2014 see Example 4.

          Example 4. Using SELECT to Retrieve Specific Properties

          In this query we retrieve and then print the _id, type, and name properties of each document.

          val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"name\"),\n        SelectResult.property(\"type\")\n    )\n    .from(DataSource.collection(collection))\n    .where(Expression.property(\"type\").equalTo(Expression.string(\"hotel\")))\n    .orderBy(Ordering.expression(Meta.id))\n\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"hotel id -> ${it.getString(\"id\")}\")\n        println(\"hotel name -> ${it.getString(\"name\")}\")\n    }\n}\n

          The Query.execute() statement returns one or more key-value pairs, one for each SelectResult expression, with the property-name as the key \u2014 see Example 5.

          Example 5. Select Result Format

          [\n  { // The result for the first document matching the query criteria.\n    \"id\": \"hotel123\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Ghia\"\n  },\n  { // The result for the next document matching the query criteria.\n    \"id\": \"hotel456\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Deluxe\"\n  }\n]\n

          See Result Sets for more on processing query results.

          "},{"location":"query-builder/#where-statement","title":"WHERE statement","text":"

          In this section Comparison Operators | Collection Operators | Like Operator | Regex Operator | Deleted Document

          Like SQL, you can use the WHERE statement to choose which documents are returned by your query. The where() statement takes in an Expression. You can chain any number of Expressions in order to implement sophisticated filtering capabilities.

          "},{"location":"query-builder/#comparison-operators","title":"Comparison Operators","text":"

The Expression Comparators can be used in the WHERE statement to specify the property on which to match documents. In the example below, we use the equalTo operator to query documents where the type property equals \"hotel\".

          [\n  { \n    \"id\": \"hotel123\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Ghia\"\n  },\n  { \n    \"id\": \"hotel456\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Deluxe\"\n  }\n]\n

          Example 6. Using Where

          val query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(Expression.property(\"type\").equalTo(Expression.string(\"hotel\")))\n    .limit(Expression.intValue(10))\n\nquery.execute().use { rs ->\n    rs.forEach { result ->\n        result.getDictionary(\"myDatabase\")?.let {\n            println(\"name -> ${it.getString(\"name\")}\")\n            println(\"type -> ${it.getString(\"type\")}\")\n        }\n    }\n}\n
          "},{"location":"query-builder/#collection-operators","title":"Collection Operators","text":"

          ArrayFunction Collection Operators are useful to check if a given value is present in an array.

          "},{"location":"query-builder/#contains-operator","title":"CONTAINS Operator","text":"

          The following example uses the ArrayFunction to find documents where the public_likes array property contains a value equal to \"Armani Langworth\".

          {\n    \"_id\": \"hotel123\",\n    \"name\": \"Apple Droid\",\n    \"public_likes\": [\"Armani Langworth\", \"Elfrieda Gutkowski\", \"Maureen Ruecker\"]\n}\n

          Example 7. Using the CONTAINS operator

          val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"name\"),\n        SelectResult.property(\"public_likes\")\n    )\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").equalTo(Expression.string(\"hotel\"))\n            .and(\n                ArrayFunction.contains(\n                    Expression.property(\"public_likes\"),\n                    Expression.string(\"Armani Langworth\")\n                )\n            )\n    )\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"public_likes -> ${it.getArray(\"public_likes\")?.toList()}\")\n    }\n}\n
          "},{"location":"query-builder/#in-operator","title":"IN Operator","text":"

          The IN operator is useful when you need to explicitly list out the values to test against. The following example looks for documents whose first, last, or username property value equals \"Armani\".

          Example 8. Using the IN operator

          val query = QueryBuilder.select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.string(\"Armani\").`in`(\n            Expression.property(\"first\"),\n            Expression.property(\"last\"),\n            Expression.property(\"username\")\n        )\n    )\n\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"public_likes -> ${it.toMap()}\")\n    }\n}\n
          "},{"location":"query-builder/#like-operator","title":"Like Operator","text":"

          In this section String Matching | Wildcard Match | Wildcard Character Match

          "},{"location":"query-builder/#string-matching","title":"String Matching","text":"

          The like() operator can be used for string matching \u2014 see Example 9.

          Note

The like operator performs case-sensitive matches. To perform case-insensitive matching, use Function.lower or Function.upper to ensure both sides of the comparison have the same case.

          This query returns landmark type documents where the name matches the string \"Royal Engineers Museum\", regardless of how it is capitalized (so, it selects \"royal engineers museum\", \"ROYAL ENGINEERS MUSEUM\" and so on).

          Example 9. Like with case-insensitive matching

          val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"country\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").equalTo(Expression.string(\"landmark\"))\n            .and(\n                Function.lower(Expression.property(\"name\"))\n                    .like(Expression.string(\"royal engineers museum\"))\n            )\n    )\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"name -> ${it.getString(\"name\")}\")\n    }\n}\n

          Note the use of Function.lower() to transform name values to the same case as the literal comparator.

          "},{"location":"query-builder/#wildcard-match","title":"Wildcard Match","text":"

We can use the % sign within a like expression to perform a wildcard match against zero or more characters. Using wildcards allows you to have some fuzziness in your search string.

          In Example 10 below, we are looking for documents of type \"landmark\" where the name property matches any string that begins with \"eng\" followed by zero or more characters, the letter \"e\", followed by zero or more characters. Once again, we are using Function.lower() to make the search case-insensitive.

          So the query returns \"landmark\" documents with names such as \"Engineers\", \"engine\", \"english egg\" and \"England Eagle\". Notice that the matches may span word boundaries.

          Example 10. Wildcard Matches

          val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"country\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").equalTo(Expression.string(\"landmark\"))\n            .and(\n                Function.lower(Expression.property(\"name\"))\n                    .like(Expression.string(\"eng%e%\"))\n            )\n    )\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"name -> ${it.getString(\"name\")}\")\n    }\n}\n
          "},{"location":"query-builder/#wildcard-character-match","title":"Wildcard Character Match","text":"

We can use the _ sign within a like expression to perform a wildcard match against a single character.

          In Example 11 below, we are looking for documents of type \"landmark\" where the name property matches any string that begins with \"eng\" followed by exactly 4 wildcard characters and ending in the letter \"r\". The query returns \"landmark\" type documents with names such as \"Engineer\", \"engineer\" and so on.

          Example 11. Wildcard Character Matching

          val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"country\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").equalTo(Expression.string(\"landmark\"))\n            .and(\n                Function.lower(Expression.property(\"name\"))\n                    .like(Expression.string(\"eng____r\"))\n            )\n    )\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"name -> ${it.getString(\"name\")}\")\n    }\n}\n
          "},{"location":"query-builder/#regex-operator","title":"Regex Operator","text":"

Similar to the wildcards in like expressions, regex-based pattern matching allows you to introduce an element of fuzziness in your search string \u2014 see the code shown in Example 12.

          Note

The regex operator is case-sensitive; use the upper or lower functions to mitigate this if required.

          Example 12. Using Regular Expressions

This example returns documents with a type of \"landmark\" and a name property that matches any string that begins with \"eng\" and ends in the letter \"r\".

          val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"country\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").equalTo(Expression.string(\"landmark\"))\n            .and(\n                Function.lower(Expression.property(\"name\"))\n                    .regex(Expression.string(\"\\\\beng.*r\\\\b\"))\n            )\n    )\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"name -> ${it.getString(\"name\")}\")\n    }\n}\n

          The \\b specifies that the match must occur on word boundaries.

          Tip

          For more on the regex spec used by Couchbase Lite see cplusplus regex reference page.

          "},{"location":"query-builder/#deleted-document","title":"Deleted Document","text":"

          You can query documents that have been deleted (tombstones) as shown in Example 13.

          Example 13. Query to select Deleted Documents

This example shows how to query deleted documents in the database. It returns an array of key-value pairs.

          // Query documents that have been deleted\nval query = QueryBuilder\n    .select(SelectResult.expression(Meta.id))\n    .from(DataSource.collection(collection))\n    .where(Meta.deleted)\n
          "},{"location":"query-builder/#join-statement","title":"JOIN statement","text":"

The JOIN clause enables you to select data from multiple documents that have been linked by criteria specified in the JOIN statement. For example, to combine airline details with route details, linked by the airline id \u2014 see Example 14.

          Example 14. Using JOIN to Combine Document Details

This example joins documents of type \"route\" with documents of type \"airline\", using the document ID (_id) on the airline document and the airlineid property on the route document.

          val query = QueryBuilder\n    .select(\n        SelectResult.expression(Expression.property(\"name\").from(\"airline\")),\n        SelectResult.expression(Expression.property(\"callsign\").from(\"airline\")),\n        SelectResult.expression(Expression.property(\"destinationairport\").from(\"route\")),\n        SelectResult.expression(Expression.property(\"stops\").from(\"route\")),\n        SelectResult.expression(Expression.property(\"airline\").from(\"route\"))\n    )\n    .from(DataSource.collection(airlineCollection).`as`(\"airline\"))\n    .join(\n        Join.join(DataSource.collection(routeCollection).`as`(\"route\"))\n            .on(\n                Meta.id.from(\"airline\")\n                    .equalTo(Expression.property(\"airlineid\").from(\"route\"))\n            )\n    )\n    .where(\n        Expression.property(\"type\").from(\"route\").equalTo(Expression.string(\"route\"))\n            .and(\n                Expression.property(\"type\").from(\"airline\")\n                    .equalTo(Expression.string(\"airline\"))\n            )\n            .and(\n                Expression.property(\"sourceairport\").from(\"route\")\n                    .equalTo(Expression.string(\"RIX\"))\n            )\n    )\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"name -> ${it.toMap()}\")\n    }\n}\n
          "},{"location":"query-builder/#group-by-statement","title":"GROUP BY statement","text":"

          You can perform further processing on the data in your result set before the final projection is generated.

          The following example looks for the number of airports at an altitude of 300 ft or higher and groups the results by country and timezone.

          Data Model for Example
          {\n    \"_id\": \"airport123\",\n    \"type\": \"airport\",\n    \"country\": \"United States\",\n    \"geo\": { \"alt\": 456 },\n    \"tz\": \"America/Anchorage\"\n}\n

          Example 15. Query using GroupBy

          This example shows a query that selects all airports with an altitude above 300ft. The output (a count, $1) is grouped by country, within timezone.

          val query = QueryBuilder\n    .select(\n        SelectResult.expression(Function.count(Expression.string(\"*\"))),\n        SelectResult.property(\"country\"),\n        SelectResult.property(\"tz\")\n    )\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").equalTo(Expression.string(\"airport\"))\n            .and(Expression.property(\"geo.alt\").greaterThanOrEqualTo(Expression.intValue(300)))\n    )\n    .groupBy(\n        Expression.property(\"country\"), Expression.property(\"tz\")\n    )\n    .orderBy(Ordering.expression(Function.count(Expression.string(\"*\"))).descending())\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\n            \"There are ${it.getInt(\"$1\")} airports on the ${\n                it.getString(\"tz\")\n            } timezone located in ${\n                it.getString(\"country\")\n            } and above 300ft\"\n        )\n    }\n}\n

          The query shown in Example 15 generates the following output:

There are 138 airports on the Europe/Paris timezone located in France and above 300 ft
There are 29 airports on the Europe/London timezone located in United Kingdom and above 300 ft
There are 50 airports on the America/Anchorage timezone located in United States and above 300 ft
There are 279 airports on the America/Chicago timezone located in United States and above 300 ft
There are 123 airports on the America/Denver timezone located in United States and above 300 ft

          "},{"location":"query-builder/#order-by-statement","title":"ORDER BY statement","text":"

          It is possible to sort the results of a query based on a given expression result \u2014 see Example 16.

          Example 16. Query using OrderBy

          This example shows a query that returns documents of type equal to \"hotel\" sorted in ascending order by the value of the title property.

          val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n    .where(Expression.property(\"type\").equalTo(Expression.string(\"hotel\")))\n    .orderBy(Ordering.property(\"name\").ascending())\n    .limit(Expression.intValue(10))\n\nquery.execute().use { rs ->\n    rs.forEach {\n        println(\"${it.toMap()}\")\n    }\n}\n

          The query shown in Example 16 generates the following output:

Aberdyfi
Achiltibuie
Altrincham
Ambleside
Annan
Ard\u00e8che
Armagh
Avignon

          "},{"location":"query-builder/#datetime-functions","title":"Date/Time Functions","text":"

          Couchbase Lite documents support a date type that internally stores dates in ISO 8601 with the GMT/UTC timezone.

          Couchbase Lite\u2019s Query Builder API includes four functions for date comparisons.

          Function.stringToMillis(Expression.property(\"date_time\")) The input to this will be a validly formatted ISO 8601 date_time string. The end result will be an expression (with a numeric content) that can be further input into the query builder.

          Function.stringToUTC(Expression.property(\"date_time\")) The input to this will be a validly formatted ISO 8601 date_time string. The end result will be an expression (with string content) that can be further input into the query builder.

          Function.millisToString(Expression.property(\"date_time\")) The input for this is a numeric value representing milliseconds since the Unix epoch. The end result will be an expression (with string content representing the date and time as an ISO 8601 string in the device\u2019s timezone) that can be further input into the query builder.

          Function.millisToUTC(Expression.property(\"date_time\")) The input for this is a numeric value representing milliseconds since the Unix epoch. The end result will be an expression (with string content representing the date and time as a UTC ISO 8601 string) that can be further input into the query builder.
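For reference, the conversions these functions describe can be sketched in plain Kotlin with java.time \u2014 illustrative only; inside a query you would use the Function.stringToMillis() and Function.millisToUTC() builders above, and the helper names here are hypothetical:

```kotlin
import java.time.Instant
import java.time.format.DateTimeFormatter

// ISO 8601 string -> milliseconds since the Unix epoch (cf. stringToMillis)
fun isoToMillis(iso: String): Long = Instant.parse(iso).toEpochMilli()

// Milliseconds since the epoch -> UTC ISO 8601 string (cf. millisToUTC)
fun millisToUtcString(millis: Long): String =
    DateTimeFormatter.ISO_INSTANT.format(Instant.ofEpochMilli(millis))
```

For example, isoToMillis("1970-01-01T00:00:00Z") yields 0, and millisToUtcString(0) yields the same instant back as a UTC string.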

          "},{"location":"query-builder/#result-sets","title":"Result Sets","text":"

          In this section Processing | Select All Properties | Select Specific Properties | Select Document ID Only | Select Count-only | Handling Pagination

          "},{"location":"query-builder/#processing","title":"Processing","text":"

          This section shows how to handle the returned result sets for different types of SELECT statements.

          The result set format and its handling varies slightly depending on the type of SelectResult statements used. The result set formats you may encounter include those generated by:

          • SelectResult.all() \u2014 see All Properties
          • SelectResult.property(\"name\") \u2014 see Specific Properties
          • SelectResult.expression(Meta.id) \u2014 Metadata (such as the _id) \u2014 see Document ID Only
          • SelectResult.expression(Function.count(Expression.all())).as(\"mycount\") \u2014 see Select Count-only

          To process the results of a query, you first need to execute it using Query.execute().

          The execution of a Kotbase database query typically returns an array of results, a result set.

          • The result set of an aggregate, count-only, query is a key-value pair \u2014 see Select Count-only \u2014 which you can access using the count name as its key.
          • The result set of a query returning document properties is an array. Each array row represents the data from a document that matched your search criteria (the WHERE statements). The composition of each row is determined by the combination of SelectResult expressions provided in the SELECT statement. To unpack these result sets you need to iterate this array.
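The two result-set shapes above can be pictured with plain Kotlin maps standing in for Kotbase Result objects (illustrative only; the names are hypothetical):

```kotlin
// Aggregate, count-only query: the result is a single key-value pair.
val countResult = mapOf("mycount" to 6)

// Query returning document properties: an array of per-document rows.
val rowResults = listOf(
    mapOf("id" to "hotel123", "name" to "Hotel Ghia"),
    mapOf("id" to "hotel456", "name" to "Hotel Deluxe")
)

// Unpacking the rows means iterating the array and reading keys per row.
fun hotelNames(rows: List<Map<String, Any?>>): List<String> =
    rows.mapNotNull { it["name"] as? String }
```

Reading countResult by its "mycount" key gives the count, while hotelNames(rowResults) iterates the rows the way Result iteration does.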
          "},{"location":"query-builder/#select-all-properties","title":"Select All Properties","text":""},{"location":"query-builder/#query","title":"Query","text":"

          The Select statement for this type of query returns all document properties for each document matching the query criteria \u2014 see Example 17.

          Example 17. Query selecting All Properties

          val query = QueryBuilder.select(SelectResult.all())\n    .from(DataSource.collection(collection))\n
          "},{"location":"query-builder/#result-set-format","title":"Result Set Format","text":"

          The result set returned by queries using SelectResult.all() is an array of dictionary objects \u2014 one for each document matching the query criteria.

          For each result object, the key is the database name and the value is a dictionary representing each document property as a key-value pair \u2014 see Example 18.

          Example 18. Format of Result Set (All Properties)

          [\n  {\n    \"travel-sample\": { // The result for the first document matching the query criteria.\n      \"callsign\": \"MILE-AIR\",\n      \"country\": \"United States\",\n      \"iata\": \"Q5\",\n      \"icao\": \"MLA\",\n      \"id\": 10,\n      \"name\": \"40-Mile Air\",\n      \"type\": \"airline\"\n    }\n  },\n  {\n    \"travel-sample\": { // The result for the next document matching the query criteria.\n      \"callsign\": \"ALASKAN-AIR\",\n      \"country\": \"United States\",\n      \"iata\": \"AA\",\n      \"icao\": \"AAA\",\n      \"id\": 10,\n      \"name\": \"Alaskan Airways\",\n      \"type\": \"airline\"\n    }\n  }\n]\n
          "},{"location":"query-builder/#result-set-access","title":"Result Set Access","text":"

          In this case access the retrieved document properties by converting each row\u2019s value, in turn, to a dictionary \u2014 as shown in Example 19.

          Example 19. Using Document Properties (All)

          val hotels = mutableMapOf<String, Hotel>()\nquery.execute().use { rs ->\n    rs.allResults().forEach {\n        // get the k-v pairs from the 'hotel' key's value into a dictionary\n        val docProps = it.getDictionary(0) \n        val docId = docProps!!.getString(\"id\")\n        val docName = docProps.getString(\"name\")\n        val docType = docProps.getString(\"type\")\n        val docCity = docProps.getString(\"city\")\n\n        // Alternatively, access results value dictionary directly\n        val id = it.getDictionary(0)?.getString(\"id\")!!\n        hotels[id] = Hotel(\n            id,\n            it.getDictionary(0)?.getString(\"type\"),\n            it.getDictionary(0)?.getString(\"name\"),\n            it.getDictionary(0)?.getString(\"city\"),\n            it.getDictionary(0)?.getString(\"country\"),\n            it.getDictionary(0)?.getString(\"description\")\n        )\n    }\n}\n
          "},{"location":"query-builder/#select-specific-properties","title":"Select Specific Properties","text":""},{"location":"query-builder/#query_1","title":"Query","text":"

          Here we use SelectResult.property(\"<property-name>\") to specify the document properties we want our query to return \u2014 see Example 20.

          Example 20. Query selecting Specific Properties

          val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"country\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n
          "},{"location":"query-builder/#result-set-format_1","title":"Result Set Format","text":"

          The result set returned when selecting only specific document properties is an array of dictionary objects \u2014 one for each document matching the query criteria.

          Each result object comprises a key-value pair for each selected document property \u2014 see Example 21.

          Example 21. Format of Result Set (Specific Properties)

          [\n  { // The result for the first document matching the query criteria.\n    \"id\": \"hotel123\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Ghia\"\n  },\n  { // The result for the next document matching the query criteria.\n    \"id\": \"hotel456\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Deluxe\"\n  }\n]\n
          "},{"location":"query-builder/#result-set-access_1","title":"Result Set Access","text":"

          Access the retrieved properties by converting each row into a dictionary \u2014 as shown in Example 22.

          Example 22. Using Returned Document Properties (Specific Properties)

          query.execute().use { rs ->\n    rs.allResults().forEach {\n        println(\"Hotel name -> ${it.getString(\"name\")}, in ${it.getString(\"country\")}\")\n    }\n}\n
          "},{"location":"query-builder/#select-document-id-only","title":"Select Document ID Only","text":""},{"location":"query-builder/#query_2","title":"Query","text":"

          You would typically use this type of query if retrieval of document properties directly would consume excessive amounts of memory and-or processing time \u2014 see Example 23.

          Example 23. Query selecting only Doc ID

          val query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id).`as`(\"hotelId\")\n    )\n    .from(DataSource.collection(collection))\n
          "},{"location":"query-builder/#result-set-format_2","title":"Result Set Format","text":"

          The result set returned by queries using a SelectResult expression of the form SelectResult.expression(Meta.id) is an array of dictionary objects \u2014 one for each document matching the query criteria. Each result object has id as the key and the ID value as its value \u2014 see Example 24.

          Example 24. Format of Result Set (Doc ID only)

          [\n  {\n    \"id\": \"hotel123\"\n  },\n  {\n    \"id\": \"hotel456\"\n  }\n]\n
          "},{"location":"query-builder/#result-set-access_2","title":"Result Set Access","text":"

          In this case, access the required document\u2019s properties by unpacking the id and using it to get the document from the database \u2014 see Example 25.

          Example 25. Using Returned Document Properties (Document ID)

          query.execute().use { rs ->\n    rs.allResults().forEach {\n        // Extract the ID value from the dictionary\n        it.getString(\"hotelId\")?.let { hotelId ->\n            println(\"hotel id -> $hotelId\")\n            // use the ID to get the document from the database\n            val doc = collection.getDocument(hotelId)\n        }\n    }\n}\n
          "},{"location":"query-builder/#select-count-only","title":"Select Count-only","text":""},{"location":"query-builder/#query_3","title":"Query","text":"

          Example 26. Query selecting a Count-only

          val query = QueryBuilder\n    .select(\n        SelectResult.expression(Function.count(Expression.string(\"*\"))).`as`(\"mycount\")\n    ) \n    .from(DataSource.collection(collection))\n

          The alias name, mycount, is used to access the count value.

          "},{"location":"query-builder/#result-set-format_3","title":"Result Set Format","text":"

          The result set returned by a count such as SelectResult.expression(Function.count(Expression.all())) is a key-value pair. The key is the count name, as defined using SelectResult.as() \u2014 see Example 27 for the format and Example 26 for the query.

          Example 27. Format of Result Set (Count)

          {\n  \"mycount\": 6\n}\n

          The key-value pair returned by a count.

          "},{"location":"query-builder/#result-set-access_3","title":"Result Set Access","text":"

          Access the count using its alias name (mycount in this example) \u2014 see Example 28.

          Example 28. Using Returned Document Properties (Count)

          query.execute().use { rs ->\n    rs.allResults().forEach {\n        println(\"mycount -> ${it.getInt(\"mycount\")}\")\n    }\n}\n

          Get the count using the SelectResult.as() alias, which is used as its key.

          "},{"location":"query-builder/#handling-pagination","title":"Handling Pagination","text":"

          One way to handle pagination in high-volume queries is to retrieve the results in batches. Use the limit and offset feature, to return a defined number of results starting from a given offset \u2014 see Example 29.

          Example 29. Query Pagination

          val thisOffset = 0\nval thisLimit = 20\nval query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .limit(\n        Expression.intValue(thisLimit),\n        Expression.intValue(thisOffset)\n    ) \n

          Return a maximum of limit results starting from result number offset.
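The batching arithmetic behind this pattern can be sketched in plain Kotlin, with a list standing in for the full result set (illustrative only; a real app would issue one query per batch with the matching limit and offset values):

```kotlin
// Split items into pages of at most pageSize entries, mirroring the
// limit/offset pattern above.
fun <T> pages(items: List<T>, pageSize: Int): List<List<T>> {
    require(pageSize > 0) { "pageSize must be positive" }
    val out = mutableListOf<List<T>>()
    var offset = 0
    while (offset < items.size) {
        // limit = pageSize, starting from result number `offset`
        out += items.subList(offset, minOf(offset + pageSize, items.size))
        offset += pageSize // the next query's offset
    }
    return out
}
```

With 45 results and a page size of 20, this yields batches of 20, 20, and 5 \u2014 the same batches three successive limit/offset queries would return.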

          Tip

          The Kotbase paging extensions provide a PagingSource to use with AndroidX Paging to assist loading and displaying pages of data in your app.

          Tip

          For more on using the QueryBuilder API, see our blog: Introducing the Query Interface in Couchbase Mobile

          "},{"location":"query-builder/#json-result-sets","title":"JSON Result Sets","text":"

          Kotbase provides a convenience API to convert query results to JSON strings.

          Use Result.toJSON() to transform your result into a JSON string, which can easily be serialized or used as required in your application. See Example 30 for a working example using kotlinx-serialization.

          Example 30. Using JSON Results

          // Uses kotlinx-serialization JSON processor\n@Serializable\ndata class Hotel(val id: String, val type: String, val name: String)\n\nval hotels = mutableListOf<Hotel>()\n\nval query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"type\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n\nquery.execute().use { rs ->\n    rs.forEach {\n\n        // Get result as JSON string\n        val json = it.toJSON()\n\n        // Get JsonObject map from JSON string\n        val mapFromJsonString = Json.decodeFromString<JsonObject>(json)\n\n        // Use created JsonObject map\n        val hotelId = mapFromJsonString[\"id\"].toString()\n        val hotelType = mapFromJsonString[\"type\"].toString()\n        val hotelName = mapFromJsonString[\"name\"].toString()\n\n        // Get custom object from JSON string\n        val hotel = Json.decodeFromString<Hotel>(json)\n        hotels.add(hotel)\n    }\n}\n
          "},{"location":"query-builder/#json-string-format","title":"JSON String Format","text":"

          If your query selects ALL then the JSON format will be:

          {\n  database-name: {\n    key1: \"value1\",\n    keyx: \"valuex\"\n  }\n}\n

          If your query selects a sub-set of available properties then the JSON format will be:

          {\n  key1: \"value1\",\n  keyx: \"valuex\"\n}\n
          "},{"location":"query-builder/#predictive-query","title":"Predictive Query","text":"

          This is an Enterprise Edition feature.

          Predictive Query enables Couchbase Lite queries to use machine learning, by providing query functions that can process document data (properties or blobs) via trained ML models.

          Let\u2019s consider an image classifier model that takes a picture as input and outputs a label and probability.

          To run a predictive query with a model such as the one shown above, you must implement the following steps:

          1. Integrate the Model
          2. Register the Model
          3. Create an Index (Optional)
          4. Run a Prediction Query
          5. Deregister the Model
          "},{"location":"query-builder/#integrate-the-model","title":"Integrate the Model","text":"

          To integrate a model with Couchbase Lite, you must implement the PredictiveModel interface which has only one function called predict() \u2014 see Example 31.

          Example 31. Integrating a predictive model

          // tensorFlowModel is a fake implementation\nobject TensorFlowModel {\n    fun predictImage(data: ByteArray?): Map<String, Any?> = TODO()\n}\n\nobject ImageClassifierModel : PredictiveModel {\n    const val name = \"ImageClassifier\"\n\n    // this would be the implementation of the ml model you have chosen\n    override fun predict(input: Dictionary) = input.getBlob(\"photo\")?.let {\n        MutableDictionary(TensorFlowModel.predictImage(it.content)) \n    }\n}\n

          The predict(input) -> output method receives the input and returns the result of running the machine learning model. Both the input and output of the predictive model are Dictionary objects, so the supported data types are constrained to those a Dictionary supports.
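The shape of this contract can be sketched with plain Kotlin types \u2014 maps standing in for Couchbase Lite's Dictionary, and all names below hypothetical rather than part of the Kotbase API:

```kotlin
// Illustrative sketch of the predict(input) -> output contract, with plain
// maps standing in for Couchbase Lite's Dictionary type (hypothetical names).
fun interface SimplePredictiveModel {
    fun predict(input: Map<String, Any?>): Map<String, Any?>?
}

val fakeClassifier = SimplePredictiveModel { input ->
    val photo = input["photo"] as? ByteArray
    if (photo == null) {
        null // a null result is treated as MISSING in queries
    } else {
        // A real model would run inference here; we fake a fixed result.
        mapOf("label" to "car", "probability" to 0.9)
    }
}
```

The real interface works the same way: the model sees only the input dictionary the query hands it and returns an output dictionary (or null).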

          "},{"location":"query-builder/#register-the-model","title":"Register the Model","text":"

          To register the model you must create a new instance and pass it to the Database.prediction.registerModel() static method.

          Example 32. Registering a predictive model

          Database.prediction.registerModel(\"ImageClassifier\", ImageClassifierModel)\n
          "},{"location":"query-builder/#create-an-index","title":"Create an Index","text":"

          Creating an index for a predictive query is highly recommended. By computing the predictions during writes and building a prediction index, you can significantly improve the speed of prediction queries (which would otherwise have to be computed during reads).

          There are two types of indexes for predictive queries:

          • Value Index
          • Predictive Index
          "},{"location":"query-builder/#value-index","title":"Value Index","text":"

          The code below creates a value index from the \"label\" value of the prediction result. When documents are added or updated, the index will call the prediction function to update the label value in the index.

          Example 33. Creating a value index

          database.createIndex(\n    \"value-index-image-classifier\",\n    IndexBuilder.valueIndex(ValueIndexItem.expression(Expression.property(\"label\")))\n)\n
          "},{"location":"query-builder/#predictive-index","title":"Predictive Index","text":"

          Predictive Index is a new index type used for predictive queries. It differs from the value index in that it caches the predictive results and creates a value index from that cache when the predictive result's values are specified.

          Example 34. Creating a predictive index

          Here we create a predictive index from the label value of the prediction result.

          val inputMap: Map<String, Any?> = mapOf(\"photo\" to Expression.property(\"photo\"))\ncollection.createIndex(\n    \"predictive-index-image-classifier\",\n    IndexBuilder.predictiveIndex(\"ImageClassifier\", Expression.map(inputMap), null)\n)\n
          "},{"location":"query-builder/#run-a-prediction-query","title":"Run a Prediction Query","text":"

          The code below creates a query that calls the prediction function to return the \"label\" value for the first 10 results in the database.

          Example 35. Running a prediction query

          val inputMap: Map<String, Any?> = mapOf(\"photo\" to Expression.property(\"photo\"))\nval prediction: PredictionFunction = Function.prediction(\n    ImageClassifierModel.name,\n    Expression.map(inputMap)\n)\n\nval query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(\n        prediction.propertyPath(\"label\").equalTo(Expression.string(\"car\"))\n            .and(\n                prediction.propertyPath(\"probability\")\n                    .greaterThanOrEqualTo(Expression.doubleValue(0.8))\n            )\n    )\n\nquery.execute().use {\n    println(\"Number of rows: ${it.allResults().size}\")\n}\n

          The Function.prediction() method returns a constructed PredictionFunction object, which can be used further to specify a property value extracted from the output dictionary of the PredictiveModel.predict() function.

          Note

          A null value returned by the prediction method will be interpreted as a MISSING value in queries.

          "},{"location":"query-builder/#deregister-the-model","title":"Deregister the Model","text":"

          To deregister the model you must call the Database.prediction.unregisterModel() static method.

          Example 36. Deregistering a predictive model

          Database.prediction.unregisterModel(\"ImageClassifier\")\n
          "},{"location":"query-result-sets/","title":"Query Result Sets","text":"

          How to use Couchbase Lite Query\u2019s Result Sets

          "},{"location":"query-result-sets/#query-execution","title":"Query Execution","text":"

          The execution of a Couchbase Lite database query returns an array of results, a result set.

          Each row of the result set represents the data returned from a document that met the conditions defined by the WHERE statement of your query. The composition of each row is determined by the SelectResult expressions provided in the SELECT statement.

          "},{"location":"query-result-sets/#returned-results","title":"Returned Results","text":"

          Return All Document Properties | Return Document ID Only | Return Specific Properties Only

          The types of SelectResult formats you may encounter include those generated by:

          • QueryBuilder.select(SelectResult.all()) \u2014 Using All
          • QueryBuilder.select(SelectResult.expression(Meta.id)) \u2014 Using Doc ID Metadata such as the _id
          • QueryBuilder.select(SelectResult.property(\"myProp\")) \u2014 Using Specific Properties
          "},{"location":"query-result-sets/#return-all-document-properties","title":"Return All Document Properties","text":"

          The SelectResult returned by SelectResult.all() is a dictionary object, with the database name as the key and the document properties as an array of key-value pairs.

          Example 1. Returning All Properties

          [\n  {\n    \"travel-sample\": { \n      \"callsign\": \"MILE-AIR\",\n      \"country\": \"United States\",\n      \"iata\": \"Q5\",\n      \"icao\": \"MLA\",\n      \"id\": 10,\n      \"name\": \"40-Mile Air\",\n      \"type\": \"airline\"\n    }\n  },\n  {\n    \"travel-sample\": { \n      \"callsign\": \"ALASKAN-AIR\",\n      \"country\": \"United States\",\n      \"iata\": \"AA\",\n      \"icao\": \"AAA\",\n      \"id\": 10,\n      \"name\": \"Alaskan Airways\",\n      \"type\": \"airline\"\n    }\n  }\n]\n
          "},{"location":"query-result-sets/#return-document-id-only","title":"Return Document ID Only","text":"

          The SelectResult returned by queries using a SelectResult expression of the form SelectResult.expression(Meta.id) comprises a dictionary object with id as the key and the ID value as the value.

          Example 2. Returning Meta Properties \u2014 Document ID

          [\n  {\n    \"id\": \"hotel123\"\n  },\n  {\n    \"id\": \"hotel456\"\n  }\n]\n
          "},{"location":"query-result-sets/#return-specific-properties-only","title":"Return Specific Properties Only","text":"

          The SelectResult returned by queries using one or more SelectResult expressions of the form SelectResult.expression(property(\"name\")) comprises a key-value pair for each SelectResult expression in the query, the key being the property name.

          Example 3. Returning Specific Properties

          [\n  { \n    \"id\": \"hotel123\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Ghia\"\n  },\n  { \n    \"id\": \"hotel456\",\n    \"type\": \"hotel\",\n    \"name\": \"Hotel Deluxe\"\n  }\n]\n
          "},{"location":"query-result-sets/#processing-results","title":"Processing Results","text":"

          Access Document Properties \u2014 All Properties | Access Document Properties \u2014 ID | Access Document Properties \u2014 Selected Properties

          To retrieve the results of your query, you need to execute it using Query.execute().

          The output from the execution is an array, with each array element representing the data from a document that matched your search criteria.

          To unpack the results you need to iterate through this array. Alternatively, you can convert the result to a JSON string \u2014 see: JSON Result Sets

          "},{"location":"query-result-sets/#access-document-properties-all-properties","title":"Access Document Properties - All Properties","text":"

          Here we look at how to access document properties when you have used SelectResult.all().

          In this case each array element is a dictionary structure with the database name as its key. The properties are presented in the value as an array of key-value pairs (property name/property value).

          You access the retrieved document properties by converting each row\u2019s value, in turn, to a dictionary \u2014 as shown in Example 4.

          Example 4. Access All Properties

          val hotels = mutableMapOf<String, Hotel>()\nquery.execute().use { rs ->\n    rs.allResults().forEach {\n        // get the k-v pairs from the 'hotel' key's value into a dictionary\n        val docProps = it.getDictionary(0)\n        val docId = docProps!!.getString(\"id\")\n        val docType = docProps.getString(\"type\")\n        val docName = docProps.getString(\"name\")\n        val docCity = docProps.getString(\"city\")\n\n        // Alternatively, access results value dictionary directly\n        val id = it.getDictionary(0)?.getString(\"id\")!!\n        hotels[id] = Hotel(\n            id,\n            it.getDictionary(0)?.getString(\"type\"),\n            it.getDictionary(0)?.getString(\"name\"),\n            it.getDictionary(0)?.getString(\"city\"),\n            it.getDictionary(0)?.getString(\"country\"),\n            it.getDictionary(0)?.getString(\"description\")\n        )\n    }\n}\n
          "},{"location":"query-result-sets/#access-document-properties-id","title":"Access Document Properties - ID","text":"

          Here we look at how to access document properties when you have returned only the document IDs for documents that matched your selection criteria.

          This is something you may do when retrieving the properties directly in the query would consume excessive amounts of memory and-or processing time.

          In this case each array element is a dictionary structure where id is the key and the required document ID is the value.

          Access the required document properties by retrieving the document from the database using its document ID \u2014 as shown in Example 5.

          Example 5. Access by ID

          query.execute().use { rs ->\n    rs.allResults().forEach {\n        // Extract the ID value from the dictionary\n        it.getString(\"id\")?.let { hotelId ->\n            println(\"hotel id -> $hotelId\")\n            // use the ID to get the document from the database\n            val doc = collection.getDocument(hotelId)\n        }\n    }\n}\n
          "},{"location":"query-result-sets/#access-document-properties-selected-properties","title":"Access Document Properties - Selected Properties","text":"

          Here we look at how to access properties when you have used SelectResult to get a specific subset of properties.

          In this case each array element is an array of key-value pairs (property name/property value).

          Access the retrieved properties by converting each row into a dictionary \u2014 as shown in Example 6.

          Example 6. Access Selected Properties

          query.execute().use { rs ->\n    rs.allResults().forEach {\n        println(\"Hotel name -> ${it.getString(\"name\")}, in ${it.getString(\"country\")}\")\n    }\n}\n
          "},{"location":"query-result-sets/#json-result-sets","title":"JSON Result Sets","text":"

          Use Result.toJSON() to transform your result into a JSON string, which can easily be serialized or used as required in your application. See Example 7 for a working example using kotlinx-serialization.

          Example 7. Using JSON Results

          // Uses kotlinx-serialization JSON processor\n@Serializable\ndata class Hotel(val id: String, val type: String, val name: String)\n\nval hotels = mutableListOf<Hotel>()\n\nval query = QueryBuilder\n    .select(\n        SelectResult.expression(Meta.id),\n        SelectResult.property(\"type\"),\n        SelectResult.property(\"name\")\n    )\n    .from(DataSource.collection(collection))\n\nquery.execute().use { rs ->\n    rs.forEach {\n\n        // Get result as JSON string\n        val json = it.toJSON()\n\n        // Get JsonObject map from JSON string\n        val mapFromJsonString = Json.decodeFromString<JsonObject>(json)\n\n        // Use created JsonObject map\n        val hotelId = mapFromJsonString[\"id\"].toString()\n        val hotelType = mapFromJsonString[\"type\"].toString()\n        val hotelName = mapFromJsonString[\"name\"].toString()\n\n        // Get custom object from JSON string\n        val hotel = Json.decodeFromString<Hotel>(json)\n        hotels.add(hotel)\n    }\n}\n
          "},{"location":"query-result-sets/#json-string-format","title":"JSON String Format","text":"

          If your query selects ALL then the JSON format will be:

          {\n  database-name: {\n    key1: \"value1\",\n    keyx: \"valuex\"\n  }\n}\n

          If your query selects a sub-set of available properties then the JSON format will be:

          {\n  key1: \"value1\",\n  keyx: \"valuex\"\n}\n
          "},{"location":"query-troubleshooting/","title":"Query Troubleshooting","text":"

          How to use the Couchbase Lite Query API\u2019s explain() method to examine a query

          "},{"location":"query-troubleshooting/#query-explain","title":"Query Explain","text":""},{"location":"query-troubleshooting/#using","title":"Using","text":"

          Query\u2019s explain() method can provide useful insight when you are trying to diagnose query performance issues and-or optimize queries. To examine how your query is working, either embed the call inside your app (see Example 1), or use it interactively within a cblite shell (see Example 2).

          Example 1. Using Query Explain in App

          val query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(Expression.property(\"type\").equalTo(Expression.string(\"university\")))\n    .groupBy(Expression.property(\"country\"))\n    .orderBy(Ordering.property(\"name\").descending()) \n\nprintln(query.explain())\n
          1. Construct your query as normal
          2. Call the query\u2019s explain method; all output is sent to the application\u2019s console log.

          Example 2. Using Query Explain in cblite

          cblite <your-database-name>.cblite2 \n\n(cblite) select --explain domains group by country order by country, name \n\n(cblite) query --explain {\"GROUP_BY\":[[\".country\"]],\"ORDER_BY\":[[\".country\"],[\".name\"]],\"WHAT\":[[\".domains\"]]} \n
          1. Within a terminal session open your database with cblite and enter your query
          2. Here the query is entered as a N1QL-query using select
          3. Here the query is entered as a JSON-string using query
          "},{"location":"query-troubleshooting/#output","title":"Output","text":"

          The output from explain() remains the same whether invoked by an app or by cblite \u2014 see Example 3 for an example of how it looks.

          Example 3. Query.explain() Output

          SELECT fl_result(fl_value(_doc.body, 'domains')) FROM kv_default AS _doc WHERE (_doc.flags & 1 = 0) GROUP BY fl_value(_doc.body, 'country') ORDER BY fl_value(_doc.body, 'country'), fl_value(_doc.body, 'name')\n\n7|0|0| SCAN TABLE kv_default AS _doc\n12|0|0| USE TEMP B-TREE FOR GROUP BY\n52|0|0| USE TEMP B-TREE FOR ORDER BY\n\n{\"GROUP_BY\":[[\".country\"]],\"ORDER_BY\":[[\".country\"],[\".name\"]],\"WHAT\":[[\".domains\"]]}\n

          This output (Example 3) comprises three main elements:

          1. The translated SQL-query, which is not necessarily useful, being aimed more at Couchbase support and-or engineering teams.
          2. The SQLite query plan, which gives a high-level view of how the SQL query will be implemented. You can use this to identify potential issues and so optimize problematic queries.
          3. The query in JSON-string format, which you can copy-and-paste directly into the cblite tool.
          "},{"location":"query-troubleshooting/#the-query-plan","title":"The Query Plan","text":""},{"location":"query-troubleshooting/#format","title":"Format","text":"

          The query plan section of the output displays a tabular form of the translated query\u2019s execution plan. It primarily shows how the data will be retrieved and, where appropriate, how it will be sorted for navigation and-or presentation purposes. For more on SQLite\u2019s Explain Query Plan \u2014 see SQLite Explain Query Plan.

          Example 4. A Query Plan

          7|0|0| SCAN TABLE kv_default AS _doc\n12|0|0| USE TEMP B-TREE FOR GROUP BY\n52|0|0| USE TEMP B-TREE FOR ORDER BY\n
          1. Retrieval method \u2014 This line shows the retrieval method being used for the query; here a sequential read of the database. Something you may well be looking to optimize \u2014 see Retrieval Method for more.
          2. Grouping method \u2014 This line shows that the Group By clause used in the query requires the data to be sorted and that a b-tree will be used for temporary storage \u2014 see Order and Group.
          3. Ordering method \u2014 This line shows that the Order By clause used in the query requires the data to be sorted and that a b-tree will be used for temporary storage \u2014 see Order and Group.
          "},{"location":"query-troubleshooting/#retrieval-method","title":"Retrieval Method","text":"

          The query optimizer will attempt to retrieve the requested data items as efficiently as possible, which generally will be by using one or more of the available indexes. The retrieval method shows the approach decided upon by the optimizer \u2014 see Table 1.

          Table 1. Retrieval methods

          Retrieval Method Description Search Here the query is able to access the required data directly using keys into the index. Queries using the Search mode are the fastest. Scan Index Here the query is able to retrieve the data by scanning all or part of the index (for example, when seeking to match values within a range). This type of query is slower than search, but at least benefits from the compact and ordered form of the index. Scan Table Here the query must scan the database table(s) to retrieve the required data. It is the slowest of these methods and will benefit most from some form of optimization.

          When looking to optimize a query\u2019s retrieval method, consider whether:

          • Providing an additional index makes sense
          • You could use an existing index \u2014 perhaps by restructuring the query to minimize wildcard use, or the reliance on functions that modify the query\u2019s interpretation of index keys (for example, lower())
          • You could reduce the data set being requested to minimize the query\u2019s footprint on the database
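The relative cost of these three retrieval methods can be illustrated outside the database entirely; a minimal stdlib-only Kotlin sketch (no Couchbase Lite APIs, the data is invented for illustration) using a sorted list as a stand-in for an index:

```kotlin
// Conceptual illustration only: the work done by each retrieval method.
// "table" plays the role of the unordered document store; "index" keeps keys in order.
val table = listOf("hotel_9", "airline_3", "hotel_1", "route_7", "hotel_2")
val index = table.sorted()

// Search: direct key access via the ordered index (fastest)
fun search(key: String): Boolean = index.binarySearch(key) >= 0

// Scan Index: walk only the matching keys, which are contiguous in sorted order
fun scanIndex(prefix: String): List<String> = index.filter { it.startsWith(prefix) }

// Scan Table: examine every row of the unordered store (slowest)
fun scanTable(prefix: String): List<String> = table.filter { it.startsWith(prefix) }

fun main() {
    println(search("hotel_1"))                                   // true
    println(scanIndex("hotel_") == scanTable("hotel_").sorted()) // same answer, different cost
}
```

The same result comes back from all three paths; the optimizer's choice only determines how much data must be touched to produce it.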
          "},{"location":"query-troubleshooting/#order-and-group","title":"Order and Group","text":"

          The Use temp b-tree for lines in the example indicate that the query requires sorting to cater for grouping and then sorting again to present the output results. Minimizing, if not eliminating, this ordering and re-ordering will obviously reduce the amount of time taken to process your query.

          Ask \"is the grouping and-or ordering absolutely necessary?\": if it isn\u2019t, drop it or modify it to minimize its impact.

          "},{"location":"query-troubleshooting/#queries-and-indexes","title":"Queries and Indexes","text":"

          Querying documents using a pre-existing database index is much faster because an index narrows down the set of documents to examine.

          When planning the indexes you need for your database, remember that while indexes make queries faster, they may also:

          • Make writes slightly slower, because each index must be updated whenever a document is updated
          • Make your Couchbase Lite database slightly larger.

          Too many indexes may hurt performance. Optimal performance depends on designing and creating the right indexes to go along with your queries.

          Constraints

          Couchbase Lite does not currently support partial value indexes, that is, indexes with non-property expressions. Index only those properties that you plan to use in the query.

          The query optimizer converts your query into a parse tree that groups zero or more and-connected clauses together (as dictated by your where conditionals) for effective query engine processing.

          Ideally a query will be able to satisfy its requirements entirely by either directly accessing the index or searching sequential index rows. Less good is if the query must scan the whole index; although the compact nature of most indexes means this is still much faster than the alternative of scanning the entire database with no help from the indexes at all.

          Searches that begin with or rely upon an inequality with the primary key are inherently less effective than those using a primary key equality.

          "},{"location":"query-troubleshooting/#working-with-the-query-optimizer","title":"Working with the Query Optimizer","text":"

          You may have noticed that sometimes a query runs faster on a second run, or after re-opening the database, or after deleting and recreating an index. This typically happens when the SQL query optimizer has gathered sufficient stats to recognize a means of optimizing a suboptimal query.

          Ideally, those stats would be available from the start. In fact, they are gathered only after certain events, such as:

          • Following index creation
          • On a database close
          • When running a database compact

          So, if your analysis of the Query Explain output indicates a suboptimal query and your rewrites fail to sufficiently optimize it, consider compacting the database. Then re-generate the Query Explain and note any improvements in optimization. They may not, in themselves, resolve the issue entirely; but they can provide a useful guide toward further optimizing changes you could make.

          "},{"location":"query-troubleshooting/#wildcard-and-like-based-queries","title":"Wildcard and Like-based Queries","text":"

          Like-based searches can use the index(es) only if:

          • The search-string doesn\u2019t start with a wildcard
          • The primary search expression uses a property that is an indexed key
          • The search-string is a constant known at run time (that is, not a value derived during processing of the query)
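The first of these rules is mechanical enough to sketch in plain Kotlin. This helper is an illustration of the rule, not a Couchbase Lite API: an ordered index can only seek to a fixed prefix, so a leading wildcard leaves nothing to seek to.

```kotlin
// Returns the fixed prefix of a SQL LIKE pattern that an ordered index could
// seek to, or null when a leading wildcard makes the index unusable.
fun indexablePrefix(likePattern: String): String? {
    // '%' matches any run of characters, '_' matches any single character
    val prefix = likePattern.takeWhile { it != '%' && it != '_' }
    return prefix.ifEmpty { null } // empty prefix => no ordered lookup possible
}

fun main() {
    println(indexablePrefix("%hotel%")) // null    -> Scan Table
    println(indexablePrefix("hotel%"))  // "hotel" -> index search on the prefix
}
```

This mirrors what you see in the query plans below: `%hotel%` forces a table scan, while `hotel%` allows an index search.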

          To illustrate this we can use a modified query from the Mobile Travel Sample application; replacing a simple equality test with a LIKE.

          In Example 5 we use a wildcard prefix and suffix. You can see that the query plan decides on a retrieval method of Scan Table.

          Tip

          For more on indexes \u2014 see Indexing

          Example 5. Like with Wildcard Prefix

          val query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").like(Expression.string(\"%hotel%\"))\n            .and(Expression.property(\"name\").like(Expression.string(\"%royal%\")))\n    )\nprintln(query.explain())\n

          The indexed property, type, cannot use its index because of the wildcard prefix.

          Resulting Query Plan
          2|0|0| SCAN TABLE kv_default AS _doc\n

          By contrast, by removing the wildcard prefix % (in Example 6), we see that the query plan\u2019s retrieval method changes to become an index search. Where practical, simple changes like this can make significant differences in query performance.

          Example 6. Like with No Wildcard-prefix

          val query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(\n        Expression.property(\"type\").like(Expression.string(\"hotel%\"))\n            .and(Expression.property(\"name\").like(Expression.string(\"%royal%\")))\n    )\nprintln(query.explain())\n

          Simply removing the wildcard prefix enables the query optimizer to access the typeIndex, which results in a more efficient search.

          Resulting Query Plan
          3|0|0| SEARCH TABLE kv_default AS _doc USING INDEX typeIndex (<expr>>? AND <expr><?)\n
          "},{"location":"query-troubleshooting/#use-functions-wisely","title":"Use Functions Wisely","text":"

          Functions are a very useful tool in building queries, but be aware that they can impact whether the query-optimizer is able to use your index(es).

          For example, you can observe a similar situation to that shown in Wildcard and Like-based Queries when using the lower() function on an indexed property.

          Query
          val query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(Function.lower(Expression.property(\"type\")).equalTo(Expression.string(\"hotel\")))\nprintln(query.explain())\n

          Here we use the lower() function in the Where expression

          Query Plan
          2|0|0| SCAN TABLE kv_default AS _doc\n

          But removing the lower() function, changes things:

          Query
          val query = QueryBuilder\n    .select(SelectResult.all())\n    .from(DataSource.collection(collection))\n    .where(Expression.property(\"type\").equalTo(Expression.string(\"hotel\"))) \nprintln(query.explain())\n

          Here we have removed lower() from the Where expression

          Query Plan
          3|0|0| SEARCH TABLE kv_default AS _doc USING INDEX typeIndex (<expr>=?)\n

          Knowing this, you can consider how you create the index; for example, using lower() when you create the index and then always using lowercase comparisons.

          "},{"location":"query-troubleshooting/#optimization-considerations","title":"Optimization Considerations","text":"

          Try to minimize the amount of data retrieved. Reduce it down to the few properties you really do need to achieve the required result.

          Consider fetching details lazily. You could break complex queries into components: return just the doc-ids first, then process that array of doc-ids using either the Document API or a query that takes the array of doc-ids as input to return information.

          Consider using paging to minimize the data returned when the number of results is expected to be high. Getting everything at once will be slow and resource intensive, and it is unlikely that all results need to be accessed in one go. Instead, retrieve batches of information at a time, perhaps using the LIMIT/OFFSET feature to set a starting point for each subsequent batch. Note, though, that query offsets become increasingly less effective as the overhead of skipping a growing number of rows increases with each batch. You can work around this by instead using ranges of search-key values: if the last search-key value of batch one was 'x', then that becomes the starting point for your next batch, and so on.
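The range-based alternative to growing offsets can be sketched with plain Kotlin (no database involved, the keys are invented for illustration): each batch starts where the previous one ended, so the skipping cost stays constant however deep you page.

```kotlin
// Keyset (range-based) paging: instead of OFFSET n, remember the last key of
// the previous batch and ask only for keys greater than it.
fun nextBatch(sortedKeys: List<String>, afterKey: String?, limit: Int): List<String> {
    val remaining = if (afterKey == null) sortedKeys
                    else sortedKeys.filter { it > afterKey } // WHERE key > :afterKey
    return remaining.take(limit)                             // LIMIT :limit
}

fun main() {
    val keys = (1..9).map { "doc%03d".format(it) } // doc001..doc009, already sorted
    var last: String? = null
    while (true) {
        val batch = nextBatch(keys, last, 4)
        if (batch.isEmpty()) break
        println(batch)
        last = batch.last() // becomes the starting point of the next batch
    }
}
```

With OFFSET, each deeper page re-skips all earlier rows; with the key range, every page does roughly the same amount of work.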

          Optimize document size in design. Smaller docs load more quickly. Break your data into logical linked units.

          Consider Using Full Text Search instead of complex like or regex patterns \u2014 see Full Text Search.

          "},{"location":"remote-sync-gateway/","title":"Remote Sync Gateway","text":"

          Couchbase Lite \u2014 Synchronizing data changes between local and remote databases using Sync Gateway

          Android enablers

          Allow Unencrypted Network Traffic

          To use cleartext, un-encrypted, network traffic (http:// and-or ws://), include android:usesCleartextTraffic=\"true\" in the application element of the manifest as shown on developer.android.com. This is not recommended in production.

          Use Background Threads

          As with any network or file I/O activity, Couchbase Lite activities should not be performed on the UI thread. Always use a background thread.

          Code Snippets

          All code examples are indicative only. They demonstrate the basic concepts and approaches to using a feature. Use them as inspiration and adapt these examples to best practice when developing applications for your platform.

          "},{"location":"remote-sync-gateway/#introduction","title":"Introduction","text":"

          Couchbase Lite provides API support for secure, bi-directional, synchronization of data changes between mobile applications and a central server database. It does so by using a replicator to interact with Sync Gateway.

          The replicator is designed to manage replication of documents and-or document changes between a source and a target database. For example, between a local Couchbase Lite database and a remote Sync Gateway database, which is ultimately mapped to a bucket in a Couchbase Server instance in the cloud or on a server.

          This page shows sample code and configuration examples covering the implementation of a replication using Sync Gateway.

          Your application runs a replicator (also referred to here as a client), which will initiate connection with a Sync Gateway (also referred to here as a server) and participate in the replication of database changes to bring both local and remote databases into sync.

          Subsequent sections provide additional details and examples for the main configuration options.

          "},{"location":"remote-sync-gateway/#replication-concepts","title":"Replication Concepts","text":"

          Couchbase Lite allows for one database for each application running on the mobile device. This database can contain one or more scopes. Each scope can contain one or more collections.

          To learn about Scopes and Collections, see Databases.

          You can set up a replication scheme across these data levels:

          Database The _default collection is synced.

          Collection A specific collection or a set of collections is synced.

          As part of the syncing setup, the Sync Gateway has to map the Couchbase Lite database to the Couchbase Server or Capella database being synced.

          "},{"location":"remote-sync-gateway/#replication-protocol","title":"Replication Protocol","text":""},{"location":"remote-sync-gateway/#scheme","title":"Scheme","text":"

          Couchbase Mobile uses a replication protocol based on WebSockets for replication. To use this protocol the replication URL should specify WebSockets as the URL scheme (see the Configure Target section below).

          "},{"location":"remote-sync-gateway/#ordering","title":"Ordering","text":"

          To optimize for speed, the replication protocol doesn\u2019t guarantee that documents will be received in a particular order, so we don\u2019t recommend relying on document order when using replication or database change listeners, for example.

          "},{"location":"remote-sync-gateway/#scopes-and-collections","title":"Scopes and Collections","text":"

          Scopes and Collections allow you to organize your documents in Couchbase Lite.

          When syncing, you can configure the collections to be synced.

          The collections specified in the Couchbase Lite replicator setup must exist (both scope and collection name must be identical) on the Sync Gateway side, otherwise starting the Couchbase Lite replicator will result in an error.

          During replication:

          1. If Sync Gateway config (or server) is updated to remove a collection that is being synced, the client replicator will be offline and will be stopped after the first retry. An error will be reported.
          2. If Sync Gateway config is updated to add a collection to a scope that is being synchronized, the replication will ignore the collection. The added collection will not automatically sync until the Couchbase Lite replicator\u2019s configuration is updated.
          "},{"location":"remote-sync-gateway/#default-collection","title":"Default Collection","text":"

          When upgrading Couchbase Lite to 3.1, the existing documents in the database will be automatically migrated to the default collection.

          For backward compatibility with the code prior to 3.1, when you set up the replicator with the database, the default collection will be set up to sync with the default collection on Sync Gateway.

          Sync Couchbase Lite database with the default collection on Sync Gateway

          Sync Couchbase Lite default collection with default collection on Sync Gateway

          "},{"location":"remote-sync-gateway/#user-defined-collections","title":"User-Defined Collections","text":"

          The user-defined collections specified in the Couchbase Lite replicator setup must exist (and be identical) on the Sync Gateway side to sync.

          Syncing scope with user-defined collections

          Syncing scope with user-defined collections. Couchbase Lite has more collections than the Sync Gateway configuration (with collection filters)

          "},{"location":"remote-sync-gateway/#configuration-summary","title":"Configuration Summary","text":"

          You should configure and initialize a replicator for each Couchbase Lite database instance you want to sync. Example 1 shows the configuration and initialization process.

          Note

          You need Couchbase Lite 3.1+ and Sync Gateway 3.1+ to use custom Scopes and Collections. If you\u2019re using Capella App Services or Sync Gateway releases older than version 3.1, you won\u2019t be able to access custom Scopes and Collections. To use Couchbase Lite 3.1+ with these older versions, fall back to using the default Collection.

          Example 1. Replication configuration and initialization

          val repl = Replicator(\n    // initialize the replicator configuration\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"wss://listener.com:8954\"),\n\n        collections = mapOf(collections to null),\n\n        // Set replicator type\n        type = ReplicatorType.PUSH_AND_PULL,\n\n        // Configure Sync Mode\n        continuous = false, // default value\n\n        // set auto-purge behavior\n        // (here we override default)\n        enableAutoPurge = false,\n\n        // Configure Server Authentication --\n        // only accept self-signed certs\n        acceptOnlySelfSignedServerCertificate = true,\n\n        // Configure the credentials the\n        // client will provide if prompted\n        authenticator = BasicAuthenticator(\"PRIVUSER\", \"let me in\".toCharArray())\n    )\n)\n\n// Optionally add a change listener\nval token = repl.addChangeListener { change ->\n    val err: CouchbaseLiteException? = change.status.error\n    if (err != null) {\n        println(\"Error code ::  ${err.code}\\n$err\")\n    }\n}\n\n// Start replicator\nrepl.start(false)\n\nthis.replicator = repl\nthis.token = token\n

          Notes on Example

          1. Get endpoint for target database.
          2. Use the ReplicatorConfiguration class\u2019s constructor \u2014 ReplicatorConfiguration(Endpoint) \u2014 to initialize the replicator configuration \u2014 see also Configure Target.
          3. The default is to auto-purge documents that this user no longer has access to \u2014 see Auto-purge on Channel Access Revocation. Here we override this behavior by setting its flag to false.
          4. Configure how the client will authenticate the server. Here we say connect only to servers presenting a self-signed certificate. By default, clients accept only servers presenting certificates that can be verified using the OS bundled Root CA Certificates \u2014 see Server Authentication.
          5. Configure the client-authentication credentials (if required). These are the credentials the client will present to Sync Gateway if requested to do so. Here we configure the client to provide Basic Authentication credentials. Other options are available \u2014 see Client Authentication.
          6. Configure how the replication should handle conflict resolution \u2014 see the Handling Data Conflicts topic for more on conflict resolution.
          7. Initialize the replicator using your configuration \u2014 see Initialize.
          8. Optionally, register an observer, which will notify you of changes to the replication status \u2014 see Monitor .
          9. Start the replicator \u2014 see Start Replicator.
          "},{"location":"remote-sync-gateway/#configure","title":"Configure","text":"

          In this section Configure Target | Sync Mode | Retry Configuration | User Authorization | Server Authentication | Client Authentication | Monitor Document Changes | Custom Headers | Checkpoint Starts | Replication Filters | Channels | Auto-purge on Channel Access Revocation | Delta Sync

          "},{"location":"remote-sync-gateway/#configure-target","title":"Configure Target","text":"

          Initialize and define the replication configuration with local and remote database locations using the ReplicatorConfiguration object.

          The constructor provides the server\u2019s URL (including the port number and the name of the remote database to sync with).

          It is expected that the app will identify the server\u2019s IP address (or hostname) and append the remote database name to the URL endpoint, producing, for example: wss://10.0.2.2:4984/travel-sample.
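Assembling that endpoint is plain string work; a minimal Kotlin sketch (the helper name, host, port, and database name here are illustrative placeholders, not part of the Couchbase Lite API):

```kotlin
// Build a Sync Gateway replication endpoint URL from its parts.
// wss:// enables TLS (recommended in production); ws:// is cleartext.
fun syncEndpoint(host: String, port: Int, dbName: String, tls: Boolean = true): String {
    val scheme = if (tls) "wss" else "ws"
    return "$scheme://$host:$port/$dbName"
}

fun main() {
    println(syncEndpoint("10.0.2.2", 4984, "travel-sample")) // wss://10.0.2.2:4984/travel-sample
}
```

The resulting string is what you would pass to `URLEndpoint` when constructing the `ReplicatorConfiguration`.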

          The URL scheme for web socket URLs uses ws: (non-TLS) or wss: (SSL/TLS) prefixes.

          Note

          On the Android platform, to use cleartext, un-encrypted, network traffic (http:// and-or ws://), include android:usesCleartextTraffic=\"true\" in the application element of the manifest as shown on developer.android.com. This is not recommended in production.

          Add the database collections to sync along with the CollectionConfiguration for each to the ReplicatorConfiguration. Multiple collections can use the same configuration, or each their own as needed. A null configuration will use the default configuration values, found in Defaults.Replicator.

          Example 2. Add Target to Configuration

          // initialize the replicator configuration\nval config = ReplicatorConfiguration(\n    URLEndpoint(\"wss://10.0.2.2:8954/travel-sample\")\n).addCollections(collections, null)\n

          Note the use of the scheme prefix: wss:// to ensure TLS encryption (strongly recommended in production), or ws://.

          "},{"location":"remote-sync-gateway/#sync-mode","title":"Sync Mode","text":"

          Here we define the direction and type of replication we want to initiate.

          We use ReplicatorConfiguration class\u2019s type and isContinuous parameters, to tell the replicator:

          • The type (or direction) of the replication: PUSH_AND_PULL; PULL; PUSH
          • The replication mode, that is either of:
            • Continuous \u2014 remaining active indefinitely to replicate changed documents (isContinuous=true).
            • Ad-hoc \u2014 a one-shot replication of changed documents (isContinuous=false).

          Example 3. Configure replicator type and mode

          // Set replicator type\ntype = ReplicatorType.PUSH_AND_PULL,\n\n// Configure Sync Mode\ncontinuous = false, // default value\n

          Tip

          Unless there is a solid use-case not to, always initiate a single PUSH_AND_PULL replication rather than identical separate PUSH and PULL replications.

          This prevents the replications from generating the same checkpoint docID, which would result in multiple conflicts.

          "},{"location":"remote-sync-gateway/#retry-configuration","title":"Retry Configuration","text":"

          Couchbase Lite\u2019s replication retry logic assures a resilient connection.

          The replicator minimizes the chance and impact of dropped connections by maintaining a heartbeat; essentially pinging the Sync Gateway at a configurable interval to ensure the connection remains alive.

          In the event it detects a transient error, the replicator will attempt to reconnect, stopping only when the connection is re-established, or the number of retries exceeds the retry limit (9 times for a single-shot replication and unlimited for a continuous replication).

          On each retry the interval between attempts is increased exponentially (exponential backoff) up to the maximum wait time limit (5 minutes).
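The shape of that schedule can be sketched in plain Kotlin. Only the 5-minute cap is stated above; the doubling-from-a-base-interval schedule below is an assumption for illustration, not the replicator's actual algorithm:

```kotlin
// Exponential backoff with a cap: each retry interval doubles (base * 2^n)
// until it reaches the maximum wait time, then stays at the cap.
// NOTE: illustrative only; the real replicator's per-interval calculation is internal.
fun backoffSchedule(attempts: Int, baseSeconds: Long = 2L, maxWaitSeconds: Long = 300L): List<Long> =
    (0 until attempts).map { n ->
        minOf(baseSeconds shl n, maxWaitSeconds) // shl n == multiply by 2^n
    }

fun main() {
    println(backoffSchedule(10)) // [2, 4, 8, 16, 32, 64, 128, 256, 300, 300]
}
```

Whatever the exact intervals, the key property is the same: waits grow quickly at first, then plateau at the configured maximum.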

          Couchbase Lite provides control over this replication retry logic through a set of configurable properties \u2014 see Table 1.

          Table 1. Replication Retry Configuration Properties

          Property Use cases Description setHeartbeat()
          • Reduce to detect connection errors sooner
          • Align to load-balancer or proxy keep-alive interval \u2014 see Sync Gateway\u2019s topic Load Balancer - Keep Alive
          The interval (in seconds) between the heartbeat pulses. Default: The replicator pings the Sync Gateway every 300 seconds. setMaxAttempts() Change this to limit or extend the number of retry attempts. The maximum number of retry attempts
          • Set to zero (0) to use default values
          • Set to one (1) to prevent any retry attempt
          • The retry attempt count is reset when the replicator is able to connect and replicate
          • Default values are:
            • Single-shot replication = 9;
            • Continuous replication = maximum integer value
          • Negative values generate a Couchbase exception InvalidArgumentException
          setMaxAttemptWaitTime() Change this to adjust the interval between retries. The maximum interval between retry attempts. While you can configure the maximum permitted wait time, the replicator\u2019s exponential backoff algorithm calculates each individual interval, which is not configurable.
          • Default value: 300 seconds (5 minutes)
          • Zero sets the maximum interval between retries to the default of 300 seconds
          • 300 sets the maximum interval between retries to the default of 300 seconds
          • A negative value generates a Couchbase exception, InvalidArgumentException

          When necessary you can adjust any or all of those configurable values \u2014 see Example 4 for how to do this.

          Example 4. Configuring Replication Retries

          val repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to null),\n        //  other config params as required . .\n        heartbeat = 150, \n        maxAttempts = 20,\n        maxAttemptWaitTime = 600\n    )\n)\nrepl.start()\nthis.replicator = repl\n
          "},{"location":"remote-sync-gateway/#user-authorization","title":"User Authorization","text":"

          By default, Sync Gateway does not enable user authorization. This makes it easier to get up and running with synchronization.

          You can enable authorization in the sync gateway configuration file, as shown in Example 5.

          Example 5. Enable Authorization

          {\n  \"databases\": {\n    \"mydatabase\": {\n      \"users\": {\n        \"GUEST\": { \"disabled\": true }\n      }\n    }\n  }\n}\n

          To authorize with Sync Gateway, an associated user must first be created. Sync Gateway users can be created through the POST /{db}/_user endpoint on the Admin REST API.

          "},{"location":"remote-sync-gateway/#server-authentication","title":"Server Authentication","text":"

          Define the credentials your app (the client) is expecting to receive from the Sync Gateway (the server) in order to ensure it is prepared to continue with the sync.

          Note that the client cannot authenticate the server if TLS is turned off. When TLS is enabled (Sync Gateway\u2019s default) the client must authenticate the server. If the server cannot provide acceptable credentials then the connection will fail.

          Use ReplicatorConfiguration properties setAcceptOnlySelfSignedServerCertificate and setPinnedServerCertificate, to tell the replicator how to verify server-supplied TLS server certificates.

          • If there is a pinned certificate, nothing else matters: the server cert must exactly match the pinned certificate.
          • If there are no pinned certs and setAcceptOnlySelfSignedServerCertificate is true then any self-signed certificate is accepted. Certificates that are not self-signed are rejected, no matter who signed them.
          • If there are no pinned certificates and setAcceptOnlySelfSignedServerCertificate is false (default), the client validates the server\u2019s certificates against the system CA certificates. The server must supply a chain of certificates whose root is signed by one of the certificates in the system CA bundle.

          Example 6. Set Server TLS security

          CA CertSelf-Signed CertPinned Certificate

          Set the client to expect and accept only CA attested certificates.

          // Configure Server Security\n// -- only accept CA attested certs\nacceptOnlySelfSignedServerCertificate = false,\n

          This is the default. Only certificate chains with roots signed by a trusted CA are allowed. Self-signed certificates are not allowed.

          Set the client to expect and accept only self-signed certificates.

          // Configure Server Authentication --\n// only accept self-signed certs\nacceptOnlySelfSignedServerCertificate = true,\n

          Set this to true to accept any self-signed cert. Any certificates that are not self-signed are rejected.

          Set the client to expect and accept only a pinned certificate.

          // Pin the first certificate of the stored TLS identity\npinnedServerCertificate = TLSIdentity.getIdentity(\"Our Corporate Id\")\n    ?.certs?.firstOrNull()\n    ?: throw IllegalStateException(\"Cannot find corporate id\"),\n

          Configure the pinned certificate using the first certificate of the stored TLS identity

          This all assumes that you have configured the Sync Gateway to provide the appropriate SSL certificates, and have included the appropriate certificate in your app bundle \u2014 for more on this see Certificate Pinning .

          "},{"location":"remote-sync-gateway/#client-authentication","title":"Client Authentication","text":"

          There are two ways to authenticate from a Couchbase Lite client: Basic Authentication or Session Authentication.

          "},{"location":"remote-sync-gateway/#basic-authentication","title":"Basic Authentication","text":"

          You can provide a username and password to the basic authenticator class method. Under the hood, the replicator will send the credentials in the first request to retrieve a SyncGatewaySession cookie and use it for all subsequent requests during the replication. This is the recommended way of using basic authentication. Example 7 shows how to initiate a one-shot replication as the user username with the password password.

          Example 7. Basic Authentication

          // Create replicator (be sure to hold a reference somewhere that will prevent the Replicator from being GCed)\nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to null),\n        authenticator = BasicAuthenticator(\"username\", \"password\".toCharArray())\n    )\n)\nrepl.start()\nthis.replicator = repl\n
          "},{"location":"remote-sync-gateway/#session-authentication","title":"Session Authentication","text":"

          Session authentication is another way to authenticate with Sync Gateway.

          A user session must first be created through the POST /{db}/_session endpoint on the Public REST API.

          The HTTP response contains a session ID which can then be used to authenticate as the user it was created for.

          See Example 8, which shows how to initiate a one-shot replication with the session ID returned from the POST /{db}/_session endpoint.

          Example 8. Session Authentication

          // Create replicator (be sure to hold a reference somewhere that will prevent the Replicator from being GCed)\nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to null),\n        authenticator = SessionAuthenticator(\"904ac010862f37c8dd99015a33ab5a3565fd8447\")\n    )\n)\nrepl.start()\nthis.replicator = repl\n
          "},{"location":"remote-sync-gateway/#custom-headers","title":"Custom Headers","text":"

          Custom headers can be set on the configuration object. The replicator will then include those headers in every request.

          This feature is useful in passing additional credentials, perhaps when an authentication or authorization step is being done by a proxy server (between Couchbase Lite and Sync Gateway) \u2014 see Example 9.

          Example 9. Setting custom headers

          // Create replicator (be sure to hold a reference somewhere that will prevent the Replicator from being GCed)\nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to null),\n        headers = mapOf(\"CustomHeaderName\" to \"Value\")\n    )\n)\nrepl.start()\nthis.replicator = repl\n
          "},{"location":"remote-sync-gateway/#replication-filters","title":"Replication Filters","text":"

          Replication Filters allow you to have quick control over the documents stored as the result of a push and/or pull replication.

          "},{"location":"remote-sync-gateway/#push-filter","title":"Push Filter","text":"

          The push filter allows an app to push a subset of a database to the server. This can be very useful. For instance, high-priority documents could be pushed first, or documents in a \"draft\" state could be skipped.

          val collectionConfig = CollectionConfigurationFactory.newConfig(\n    pushFilter = { _, flags -> flags.contains(DocumentFlag.DELETED) }\n)\n\n// Create replicator (be sure to hold a reference somewhere that will prevent the Replicator from being GCed)\nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to collectionConfig)\n    )\n)\nrepl.start()\nthis.replicator = repl\n

          The callback should follow the semantics of a pure function. Otherwise, long-running functions would slow down the replicator considerably. Furthermore, your callback should not make assumptions about what thread it is being called on.

          "},{"location":"remote-sync-gateway/#pull-filter","title":"Pull Filter","text":"

          The pull filter gives an app the ability to validate documents being pulled, and skip ones that fail. This is an important security mechanism in a peer-to-peer topology with peers that are not fully trusted.

          Note

          Pull replication filters are not a substitute for channels. Sync Gateway channels are designed to be scalable (documents are filtered on the server) whereas a pull replication filter is applied to a document once it has been downloaded.

          val collectionConfig = CollectionConfigurationFactory.newConfig(\n    pullFilter = { document, _ -> \"draft\" == document.getString(\"type\") }\n)\n\n// Create replicator (be sure to hold a reference somewhere that will prevent the Replicator from being GCed)\nval repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to collectionConfig)\n    )\n)\nrepl.start()\nthis.replicator = repl\n

          The callback should follow the semantics of a pure function. Otherwise, long-running functions would slow down the replicator considerably. Furthermore, your callback should not make assumptions about what thread it is being called on.

          Losing access to a document via the Sync Function.

          Losing access to a document (via the Sync Function) also triggers the pull replication filter.

          Filtering out such an event would retain the document locally.

          As a result, there would be a local copy of the document disjointed from the one that resides on Couchbase Server.

          Further updates to the document stored on Couchbase Server would not be received in pull replications; further local edits could still be pushed, but the updated versions would not be visible.

          For more information, see Auto-purge on Channel Access Revocation.

          "},{"location":"remote-sync-gateway/#channels","title":"Channels","text":"

          By default, Couchbase Lite gets all the channels to which the configured user account has access.

          This behavior is suitable for most apps that rely on user authentication and the sync function to specify which data to pull for each user.

          Optionally, it\u2019s also possible to specify a string array of channel names on Couchbase Lite\u2019s replicator configuration object. In this case, the replication from Sync Gateway will only pull documents tagged with those channels.
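          As an illustration, a hedged sketch of a channel-filtered pull (this assumes the CollectionConfigurationFactory accepts a `channels` parameter mirroring CollectionConfiguration's channels property; the channel names are examples — check the Kotbase API for the exact parameter name):

```kotlin
// Hedged sketch: restrict the pull to documents tagged with specific channels.
// Assumes a `channels` parameter on CollectionConfigurationFactory; the
// channel names below are hypothetical examples.
val collectionConfig = CollectionConfigurationFactory.newConfig(
    channels = listOf("channel.travel", "channel.hotels")
)

val repl = Replicator(
    ReplicatorConfigurationFactory.newConfig(
        target = URLEndpoint("ws://localhost:4984/mydatabase"),
        collections = mapOf(collections to collectionConfig),
        type = ReplicatorType.PULL
    )
)
repl.start()
```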

          "},{"location":"remote-sync-gateway/#auto-purge-on-channel-access-revocation","title":"Auto-purge on Channel Access Revocation","text":"

          This is a Breaking Change at 3.0

          "},{"location":"remote-sync-gateway/#new-outcome","title":"New outcome","text":"

          By default, when a user loses access to a channel all documents in the channel (that do not also belong to any of the user\u2019s other channels) are auto-purged from the local database (in devices belonging to the user).

          "},{"location":"remote-sync-gateway/#prior-outcome","title":"Prior outcome","text":"

          Previously, these documents remained in the local database.

          Prior to CBL 3.0, CBL auto-purged only when a user lost access to a document because it had been removed from all of the user\u2019s channels. Now, in addition to the 2.x auto-purge, Couchbase Lite also auto-purges docs when the user loses access to them through channel access revocation. This feature is enabled by default, but an opt-out is available.

          "},{"location":"remote-sync-gateway/#behavior","title":"Behavior","text":"

          Users may lose access to channels in a number of ways:

          • User loses direct access to channel
          • User is removed from a role
          • A channel is removed from a role the user is assigned to

          By default, when a user loses access to a channel, the next Couchbase Lite pull replication auto-purges all documents in the channel from local Couchbase Lite databases (on devices belonging to the user) unless they belong to any of the user\u2019s other channels \u2014 see Table 2.

          Documents that exist in multiple channels belonging to the user (even if they are not actively replicating that channel) are not auto-purged unless the user loses access to all channels.

          Users will receive an ACCESS_REMOVED notification from the DocumentReplicationListener if they lose document access due to channel access revocation; this is sent regardless of the current auto-purge setting.

          Table 2. Behavior following access revocation

          All rows assume isAutoPurgeEnabled=true and describe the impact on sync by replication type:

          • Pull only: User REVOKED access to channel; Sync Function includes requireAccess(revokedChannel). Previously synced documents are auto-purged on the local database.
          • Push only: User REVOKED access to channel; Sync Function includes requireAccess(revokedChannel). No impact of auto-purge; documents get pushed but are rejected by Sync Gateway.
          • Push-pull: User REVOKED access to channel; Sync Function includes requireAccess(revokedChannel). Previously synced documents are auto-purged on Couchbase Lite; local changes continue to be pushed to the remote but are rejected by Sync Gateway.

          If a user subsequently regains access to a lost channel, then any previously auto-purged documents still assigned to any of their channels are automatically pulled down by the active Sync Gateway when they are next updated \u2014 see behavior summary in Table 3.

          Table 3. Behavior if access is regained

          All rows assume isAutoPurgeEnabled=true and describe the impact on sync by replication type:

          • Pull only: User REASSIGNED access to channel. Previously purged documents that are still in the channel are automatically pulled by Couchbase Lite when they are next updated.
          • Push only: User REASSIGNED access to channel; Sync Function includes requireAccess(reassignedChannel). No impact of auto-purge; local changes previously rejected by Sync Gateway will not be automatically pushed to the remote unless resetCheckpoint is invoked on CBL. Document changes subsequent to the channel reassignment will be pushed up as usual.
          • Push-pull: User REASSIGNED access to channel; Sync Function includes requireAccess(reassignedChannel). Previously purged documents are automatically pulled by Couchbase Lite; local changes previously rejected by Sync Gateway will not be automatically pushed to the remote unless resetCheckpoint is invoked. Document changes subsequent to the channel reassignment will be pushed up as usual.

          "},{"location":"remote-sync-gateway/#config","title":"Config","text":"

          Auto-purge behavior is controlled primarily by the ReplicationConfiguration option setAutoPurgeEnabled(). Changing the state of this will impact only future replications; the replicator will not attempt to sync revisions that were auto purged on channel access removal. Clients wishing to sync previously removed documents must use the resetCheckpoint API to resync from the start.

          Example 10. Setting auto-purge

          // set auto-purge behavior\n// (here we override default)\nenableAutoPurge = false,\n

          Here we have opted to turn off the auto purge behavior. By default auto purge is enabled.

          "},{"location":"remote-sync-gateway/#overrides","title":"Overrides","text":"

          Where necessary, clients can override the default auto-purge behavior. This can be done either by setting setAutoPurgeEnabled() to false or, for finer control, by applying pull filters \u2014 see Table 4 and Replication Filters. This ensures backward compatibility with 2.8 clients that use pull filters to prevent auto-purge of removed docs.

          Table 4. Impact of Pull-Filters

          • purge_on_removal disabled: Whether or not a pull filter is defined, the doc remains in the local database, and the app is notified of ACCESS_REMOVED if a DocumentReplicationListener is registered.
          • purge_on_removal enabled (DEFAULT): With no pull filter defined, the doc is auto-purged and the app is notified of ACCESS_REMOVED if a DocumentReplicationListener is registered. With a pull filter defined to filter removals/revoked docs, the doc remains in the local database."},{"location":"remote-sync-gateway/#delta-sync","title":"Delta Sync","text":"

          This is an Enterprise Edition feature.

          With Delta Sync, only the changed parts of a Couchbase document are replicated. This can result in significant savings in bandwidth consumption as well as throughput improvements, especially when network bandwidth is typically constrained.

          Replications to a Server (for example, a Sync Gateway, or passive listener) automatically use delta sync if the property is enabled at database level by the server \u2014 see Admin REST API delta_sync.enabled or legacy JSON configuration databases.$db.delta_sync.enabled.

          Intra-Device replications automatically disable delta sync, whilst Peer-to-Peer replications automatically enable delta sync.
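          For reference, a hedged sketch of the legacy Sync Gateway JSON configuration mentioned above (the database name is an example; consult the Sync Gateway configuration reference for your version):

```json
{
  "databases": {
    "mydatabase": {
      "delta_sync": { "enabled": true }
    }
  }
}
```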

          "},{"location":"remote-sync-gateway/#initialize","title":"Initialize","text":"

          In this section Start Replicator | Checkpoint Starts

          "},{"location":"remote-sync-gateway/#start-replicator","title":"Start Replicator","text":"

          Use the Replicator class\u2019s Replicator(ReplicatorConfiguration) constructor to initialize the replicator with the configuration you have defined. You can, optionally, add a change listener (see Monitor) before starting the replicator with start().

          Example 11. Initialize and run replicator

          // Create replicator\n// Consider holding a reference somewhere\n// to prevent the Replicator from being GCed\nval repl = Replicator( \n\n    // initialize the replicator configuration\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"wss://listener.com:8954\"),\n\n        collections = mapOf(collections to null),\n\n        // Set replicator type\n        type = ReplicatorType.PUSH_AND_PULL,\n\n        // Configure Sync Mode\n        continuous = false, // default value\n\n        // set auto-purge behavior\n        // (here we override default)\n        enableAutoPurge = false,\n\n        // Configure Server Authentication --\n        // only accept self-signed certs\n        acceptOnlySelfSignedServerCertificate = true,\n\n        // Configure the credentials the\n        // client will provide if prompted\n        authenticator = BasicAuthenticator(\"PRIVUSER\", \"let me in\".toCharArray())\n    )\n)\n\n// Start replicator\nrepl.start(false)\n\nthis.replicator = repl\n
          1. Initialize the replicator with the configuration
          2. Start the replicator
          "},{"location":"remote-sync-gateway/#checkpoint-starts","title":"Checkpoint Starts","text":"

          Replicators use checkpoints to keep track of documents sent to the target database.

          Without checkpoints, Couchbase Lite would replicate the entire database content to the target database on each connection, even though previous replications may already have replicated some or all of that content.

          This functionality is generally not a concern to application developers. However, if you do want to force the replication to start again from zero, use the checkpoint reset argument when starting the replicator \u2014 as shown in Example 12.

          Example 12. Resetting checkpoints

          repl.start(true)\n

          Set start\u2019s reset option to true. The default, false, was shown in the earlier example for completeness only; it is unlikely you would pass it explicitly in practice.

          "},{"location":"remote-sync-gateway/#monitor","title":"Monitor","text":"

          In this section Change Listeners | Replicator Status | Monitor Document Changes | Documents Pending Push

          You can monitor a replication\u2019s status by using a combination of Change Listeners and the replicator.status.activityLevel property \u2014 see activityLevel. This enables you to know, for example, when the replication is actively transferring data and when it has stopped.

          You can also choose to monitor document changes \u2014 see Monitor Document Changes.

          "},{"location":"remote-sync-gateway/#change-listeners","title":"Change Listeners","text":"

          Use a change listener to monitor changes and to be informed of sync progress; this is an optional step. You can add a replicator change listener at any point; it will report changes from the point at which it is registered.

          Tip

          Don\u2019t forget to save the token so you can remove the listener later

          Use the Replicator class to add a change listener as a callback with Replicator.addChangeListener() \u2014 see Example 13. You will then be asynchronously notified of state changes.

          You can remove a change listener with ListenerToken.remove().

          "},{"location":"remote-sync-gateway/#using-kotlin-flows","title":"Using Kotlin Flows","text":"

          Kotlin developers can take advantage of Flows to monitor replicators.

          fun replChangeFlowExample(repl: Replicator): Flow<ReplicatorActivityLevel> {\n    return repl.replicatorChangesFlow()\n        .map { it.status.activityLevel }\n}\n
          "},{"location":"remote-sync-gateway/#replicator-status","title":"Replicator Status","text":"

          You can use the ReplicatorStatus class to check the replicator status. That is, whether it is actively transferring data or if it has stopped \u2014 see Example 13.

          The returned ReplicatorStatus structure comprises:

          • activityLevel \u2014 STOPPED, OFFLINE, CONNECTING, IDLE, or BUSY \u2014 see states described in Table 5
          • progress
            • completed \u2014 the total number of changes completed
            • total \u2014 the total number of changes to be processed
          • error \u2014 the current error, if any

          Example 13. Monitor replication

          Adding a Change Listener:
          val token = repl.addChangeListener { change ->\n    val err: CouchbaseLiteException? = change.status.error\n    if (err != null) {\n        println(\"Error code :: ${err.code}\\n$err\")\n    }\n}\n

          Using replicator.status:
          repl.status.let {\n    val progress = it.progress\n    println(\n        \"The Replicator is ${\n            it.activityLevel\n        } and has processed ${\n            progress.completed\n        } of ${progress.total} changes\"\n    )\n}\n
          "},{"location":"remote-sync-gateway/#replication-states","title":"Replication States","text":"

          Table 5 shows the different states, or activity levels, reported in the API; and the meaning of each.

          Table 5. Replicator activity levels

          • STOPPED: The replication is finished or hit a fatal error.
          • OFFLINE: The replicator is offline because the remote host is unreachable.
          • CONNECTING: The replicator is connecting to the remote host.
          • IDLE: The replication has caught up with all changes available from the server. The IDLE state is used only in continuous replications.
          • BUSY: The replication is actively transferring data.

          Note

          The replication change object also has properties to track progress (change.status.completed and change.status.total). Since replication occurs in batches, the total count can vary over the course of a replication.

          "},{"location":"remote-sync-gateway/#replication-status-and-app-life-cycle","title":"Replication Status and App Life Cycle","text":""},{"location":"remote-sync-gateway/#ios","title":"iOS","text":"

          The following diagram describes the status changes when the application starts a replication, and when the application is being backgrounded or foregrounded by the OS. It applies to iOS only.

          Additionally, on iOS, an app already in the background may be terminated. In this case, the Database and Replicator instances will be null when the app returns to the foreground. Therefore, as a preventive measure, it is recommended to do a null check when the app enters the foreground, and to re-initialize the database and replicator if either is null.

          On other platforms, Couchbase Lite doesn\u2019t react to OS backgrounding or foregrounding events, and replication(s) will continue running as long as the remote system does not terminate the connection and the app does not terminate. It is generally recommended to stop replications before going into the background; otherwise, socket connections may be closed by the OS, which may interfere with the replication process.

          "},{"location":"remote-sync-gateway/#other-platforms","title":"Other Platforms","text":"

          Couchbase Lite replications will continue running until the app terminates, unless the remote system, or the application, terminates the connection.

          Note

          Recall that the Android OS may kill an application without warning. You should explicitly stop replication processes when they are no longer useful (for example, when the app is in the background and the replication is IDLE) to avoid socket connections being closed by the OS, which may interfere with the replication process.

          "},{"location":"remote-sync-gateway/#monitor-document-changes","title":"Monitor Document Changes","text":"

          You can choose to register for document updates during a replication.

          For example, the code snippet in Example 14 registers a listener to monitor document replication performed by the replicator referenced by the variable repl. It prints the document ID of each document received and sent. Stop the listener as shown in Example 15.

          Example 14. Register a document listener

          val token = repl.addDocumentReplicationListener { replication ->\n    println(\"Replication type: ${if (replication.isPush) \"push\" else \"pull\"}\")\n\n    for (doc in replication.documents) {\n        println(\"Doc ID: ${doc.id}\")\n\n        doc.error?.let {\n            // There was an error\n            println(\"Error replicating document: $it\")\n            return@addDocumentReplicationListener\n        }\n\n        if (doc.flags.contains(DocumentFlag.DELETED)) {\n            println(\"Successfully replicated a deleted document\")\n        }\n    }\n}\n\nrepl.start()\nthis.replicator = repl\n

          Example 15. Stop document listener

          This code snippet shows how to stop the document listener using the token from the previous example.

          token.remove()\n
          "},{"location":"remote-sync-gateway/#document-access-removal-behavior","title":"Document Access Removal Behavior","text":"

          When access to a document is removed on Sync Gateway (see Sync Gateway\u2019s Sync Function), the document replication listener sends a notification with the ACCESS_REMOVED flag set to true and subsequently purges the document from the database.

          "},{"location":"remote-sync-gateway/#documents-pending-push","title":"Documents Pending Push","text":"

          Tip

          Replicator.isDocumentPending() is quicker and more efficient. Use it in preference to returning a list of pending document IDs, where possible.

          You can check whether documents are waiting to be pushed in any forthcoming sync by using either of the following API methods:

          • Use the Replicator.getPendingDocumentIds() method, which returns a list of document IDs that have local changes, but which have not yet been pushed to the server. This can be very useful in tracking the progress of a push sync, enabling the app to provide a visual indicator to the end user on its status, or decide when it is safe to exit.
          • Use the Replicator.isDocumentPending() method to quickly check whether an individual document is pending a push.

          Example 16. Use Pending Document ID API

          val repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"ws://localhost:4984/mydatabase\"),\n        collections = mapOf(setOf(collection) to null),\n        type = ReplicatorType.PUSH\n    )\n)\n\nval pendingDocs = repl.getPendingDocumentIds()\n\n// iterate and report on previously\n// retrieved pending docIds 'list'\nif (pendingDocs.isNotEmpty()) {\n    println(\"There are ${pendingDocs.size} documents pending\")\n\n    val firstDoc = pendingDocs.first()\n    repl.addChangeListener { change ->\n        println(\"Replicator activity level is ${change.status.activityLevel}\")\n        try {\n            if (!repl.isDocumentPending(firstDoc)) {\n                println(\"Doc ID $firstDoc has been pushed\")\n            }\n        } catch (err: CouchbaseLiteException) {\n            println(\"Failed getting pending docs\\n$err\")\n        }\n    }\n\n    repl.start()\n    this.replicator = repl\n}\n
          1. Replicator.getPendingDocumentIds() returns a list of the document IDs for all documents waiting to be pushed. This is a snapshot and may have changed by the time the response is received and processed.
          2. Replicator.isDocumentPending() returns true if the document is waiting to be pushed, and false otherwise.
          "},{"location":"remote-sync-gateway/#stop","title":"Stop","text":"

          Stopping a replication is straightforward. It is done using stop(). This initiates an asynchronous operation and so is not necessarily immediate. Your app should account for this potential delay before attempting any subsequent operations.

          You can find further information on database operations in Databases.

          Example 17. Stop replicator

          // Stop replication.\nrepl.stop()\n

          Here we initiate the stop using the stop() method. Any active change listeners are stopped once the replication has stopped.
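          Because stop() is asynchronous, one hedged pattern (using the same listener APIs shown earlier) is to watch for the STOPPED activity level before releasing resources:

```kotlin
// Hedged sketch: observe the STOPPED state before, for example,
// closing the database or discarding the replicator reference.
val token = repl.addChangeListener { change ->
    if (change.status.activityLevel == ReplicatorActivityLevel.STOPPED) {
        println("Replication fully stopped")
        // safe to close the database or release the replicator here
    }
}
repl.stop()
```

          Remember to remove the listener with token.remove() once it is no longer needed.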

          "},{"location":"remote-sync-gateway/#error-handling","title":"Error Handling","text":"

          When a replicator detects a network error it updates its status depending on the error type (permanent or temporary) and returns an appropriate HTTP error code.

          The following code snippet adds a change listener, which monitors a replication for errors and logs the returned error code.

          Example 18. Monitoring for network errors

          repl.addChangeListener { change ->\n    change.status.error?.let {\n        println(\"Error code: ${it.code}\")\n    }\n}\nrepl.start()\nthis.replicator = repl\n

          For permanent network errors (for example, 404 Not Found or 401 Unauthorized): the replicator will stop permanently, whether setContinuous is true or false, and will set its status to STOPPED.

          For recoverable or temporary errors: Replicator sets its status to OFFLINE, then:

          • If setContinuous=true it retries the connection indefinitely
          • If setContinuous=false (one-shot) it retries the connection a limited number of times.

          The following error codes are considered temporary by the Couchbase Lite replicator and thus will trigger a connection retry:

          • 408: Request Timeout
          • 429: Too Many Requests
          • 500: Internal Server Error
          • 502: Bad Gateway
          • 503: Service Unavailable
          • 504: Gateway Timeout
          • 1001: DNS resolution error
          "},{"location":"remote-sync-gateway/#using-kotlin-flows_1","title":"Using Kotlin Flows","text":"

          Kotlin developers can also take advantage of Flows to monitor replicators.

          scope.launch {\n    repl.replicatorChangesFlow()\n        .mapNotNull { it.status.error }\n        .collect { error ->\n            println(\"Replication error :: $error\")\n        }\n}\n
          "},{"location":"remote-sync-gateway/#load-balancers","title":"Load Balancers","text":"

          Couchbase Lite uses WebSockets as the communication protocol to transmit data. Some load balancers are not configured for WebSocket connections by default (NGINX for example); so it might be necessary to explicitly enable them in the load balancer\u2019s configuration \u2014 see Load Balancers.
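          As an illustration, a hedged NGINX sketch of the WebSocket upgrade headers typically required (the upstream address, location, and timeout are example values, not a prescribed configuration):

```nginx
# Pass WebSocket upgrade requests through to Sync Gateway
location / {
    proxy_pass http://sync_gateway:4984;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # long-lived replication connections: raise the idle read timeout
    proxy_read_timeout 360s;
}
```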

          By default, the WebSocket protocol uses compression to optimize for speed and bandwidth utilization. The level of compression is set on Sync Gateway and can be tuned in the configuration file (replicator_compression).

          "},{"location":"remote-sync-gateway/#certificate-pinning","title":"Certificate Pinning","text":"

          Couchbase Lite supports certificate pinning.

          Certificate pinning is a technique that can be used by applications to \"pin\" a host to its certificate. The certificate is typically delivered to the client by an out-of-band channel and bundled with the client. In this case, Couchbase Lite uses this embedded certificate to verify the trustworthiness of the server (for example, a Sync Gateway) and no longer needs to rely on a trusted third party for that (commonly referred to as the Certificate Authority).

          For the 3.0.2 release, changes were made to the way certificates on the host are matched:

          • Prior to CBL 3.0.2: The pinned certificate was compared only with the leaf certificate of the host. This is not always suitable, as leaf certificates are usually valid for shorter periods of time.
          • CBL 3.0.2+: The pinned certificate is compared against any certificate in the server's certificate chain.

          The following steps describe how to configure certificate pinning between Couchbase Lite and Sync Gateway:

          1. Create your own self-signed certificate with the openssl command. After completing this step, you should have 3 files: cert.pem, cert.cer, and privkey.pem.
          2. Configure Sync Gateway with the cert.pem and privkey.pem files. After completing this step, Sync Gateway is reachable over https/wss.
          3. On the Couchbase Lite side, the replication must point to a URL with the wss scheme and be configured with the cert.cer file created in step 1.
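          Step 1 can be sketched with openssl as follows (key size, validity period, and subject are example values; adjust them for your deployment):

```shell
# Generate a private key and a self-signed PEM certificate
openssl req -x509 -newkey rsa:2048 -keyout privkey.pem -out cert.pem \
    -days 365 -nodes -subj "/CN=localhost"

# Convert the PEM certificate to DER (.cer) for bundling with the client
openssl x509 -in cert.pem -outform der -out cert.cer
```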

          This example loads the certificate from the application sandbox, then converts it to the appropriate type to configure the replication object.

          Example 19. Certificate Pinning

          val repl = Replicator(\n    ReplicatorConfigurationFactory.newConfig(\n        target = URLEndpoint(\"wss://localhost:4984/mydatabase\"),\n        collections = mapOf(collections to null),\n        pinnedServerCertificate = PlatformUtils.getAsset(\"cert.cer\")?.readByteArray()\n    )\n)\nrepl.start()\nthis.replicator = repl\n

          Note

          PlatformUtils.getAsset() needs to be implemented in a platform-specific way \u2014 see example in Kotbase tests.

          The replication should now run successfully over https/wss with certificate pinning.

          For more on pinning certificates see the blog entry: Certificate Pinning with Couchbase Mobile.

          "},{"location":"remote-sync-gateway/#troubleshooting","title":"Troubleshooting","text":""},{"location":"remote-sync-gateway/#logs","title":"Logs","text":"

          As always, when there is a problem with replication, logging is your friend. You can increase the log output for activity related to replication with Sync Gateway \u2014 see Example 20.

          Example 20. Set logging verbosity

          Database.log.console.setDomains(LogDomain.REPLICATOR)\nDatabase.log.console.level = LogLevel.DEBUG\n

          For more on troubleshooting with logs, see Using Logs.

          "},{"location":"remote-sync-gateway/#authentication-errors","title":"Authentication Errors","text":"

          If Sync Gateway is configured with a self-signed certificate but your app points to a ws scheme instead of wss, you will encounter an error with status code 11006 \u2014 see Example 21.

          Example 21. Protocol Mismatch

          CouchbaseLite Replicator ERROR: {Repl#2} Got LiteCore error: WebSocket error 1006 \"connection closed abnormally\"\n

          If Sync Gateway is configured with a self-signed certificate, and your app points to a wss scheme but the replicator configuration isn\u2019t using the certificate, you will encounter an error with status code 5011 \u2014 see Example 22.

          Example 22. Certificate Mismatch or Not Found

          CouchbaseLite Replicator ERROR: {Repl#2} Got LiteCore error: Network error 11 \"server TLS certificate is self-signed or has unknown root cert\"\n
          "},{"location":"roadmap/","title":"Roadmap","text":"
          • Documentation website (kotbase.dev)
          • NSInputStream interoperability (Okio #1123) (kotlinx-io #174)
          • Linux ARM64 support
          • Public release
          • Sample apps
            • Getting Started
            • Getting Started Compose Multiplatform
          • Couchbase Lite 3.1 API - Scopes and Collections
          • Versioned docs
          • Async coroutines API
          "},{"location":"scopes-and-collections/","title":"Scopes and Collections","text":"

          Scopes and collections allow you to organize your documents within a database.

          At a glance

          Use collections to organize your content in a database

          For example, if your database contains travel information, airport documents can be assigned to an airports collection, hotel documents can be assigned to a hotels collection, and so on.

          • Document names must be unique within their collection.

          Use scopes to group multiple collections

          Collections can be assigned to different scopes according to content-type or deployment-phase (for example, test versus production).

          • Collection names must be unique within their scope.
          "},{"location":"scopes-and-collections/#default-scopes-and-collections","title":"Default Scopes and Collections","text":"

          Every database you create contains a default scope and a default collection named _default.

          If you create a document in the database and don\u2019t specify a specific scope or collection, it is saved in the default collection, in the default scope.

          If you upgrade from a version of Couchbase Lite prior to 3.1, all existing data is automatically placed in the default scope and default collection.

          The default scope and collection cannot be dropped.
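          As a sketch (defaultScope is shown in Example 1 below; the \"_default\" names follow from the text above):

          ```kotlin
          // Retrieve the built-in defaults; neither can be dropped.
          val defaultScope = db.defaultScope
          val defaultCollection = defaultScope.getCollection("_default")
          ```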

          "},{"location":"scopes-and-collections/#create-a-scope-and-collection","title":"Create a Scope and Collection","text":"

          In addition to the default scope and collection, you can create your own scopes and collections in which to store documents.

          Naming conventions for collections and scopes:

          • Must be between 1 and 251 characters in length.
          • Can only contain the characters A-Z, a-z, 0-9, and the symbols _, -, and %.
          • Cannot start with _ or %.
          • Scope names must be unique within a database.
          • Collection names must be unique within a scope.

          Note

          Scope and collection names are case sensitive.
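          The rules above can be expressed as a small check. This is a hypothetical helper, not part of the Kotbase API:

          ```kotlin
          // Checks a scope or collection name against the documented rules:
          // 1-251 characters, limited to A-Z, a-z, 0-9, _, - and %,
          // and not starting with _ or %.
          val nameRegex = Regex("^[A-Za-z0-9_%-]{1,251}$")

          fun isValidScopeOrCollectionName(name: String): Boolean =
              nameRegex.matches(name) && name.first() != '_' && name.first() != '%'
          ```

          For example, isValidScopeOrCollectionName(\"Television\") returns true, while \"_hidden\" (reserved prefix) and \"bad name\" (space not allowed) are rejected.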

          Example 1. Create a scope and collection

          // create the collection \"Verlaine\" in the default scope (\"_default\")\nvar collection1: Collection? = db.createCollection(\"Verlaine\")\n// both of these retrieve collection1 created above\ncollection1 = db.getCollection(\"Verlaine\")\ncollection1 = db.defaultScope.getCollection(\"Verlaine\")\n\n// create the collection \"Verlaine\" in the scope \"Television\"\nvar collection2: Collection? = db.createCollection(\"Television\", \"Verlaine\")\n// both of these retrieve  collection2 created above\ncollection2 = db.getCollection(\"Television\", \"Verlaine\")\ncollection2 = db.getScope(\"Television\")!!.getCollection(\"Verlaine\")\n

          In the example above, you can see that db.createCollection() can take two parameters. The first is the scope the created collection is assigned to; if this parameter is omitted, a collection of the given name is assigned to the _default scope \u2014 in this case, a collection called Verlaine.

          The second parameter is the name of the collection you want to create, in this case Verlaine. In the second section of the example, db.createCollection(\"Television\", \"Verlaine\") creates the collection Verlaine and checks whether the scope Television exists. If it does, the collection Verlaine is assigned to it; if not, the scope Television is created first and the collection Verlaine is then assigned to it.

          Note

          You cannot create an empty user-defined scope; a scope is created implicitly when db.createCollection() creates the first collection within it.

          "},{"location":"scopes-and-collections/#index-a-collection","title":"Index a Collection","text":"

          Example 2. Index a Collection

          // Create an index named \"nameIndex1\" on the property \"lastName\" in the collection using the IndexBuilder\ncollection.createIndex(\"nameIndex1\", IndexBuilder.valueIndex(ValueIndexItem.property(\"lastName\")))\n\n// Create a similar index named \"nameIndex2\" using an IndexConfiguration\ncollection.createIndex(\"nameIndex2\", ValueIndexConfiguration(\"lastName\"))\n\n// get the names of all the indices in the collection\nval indices = collection.indexes\n\n// delete all the collection indices\nindices.forEach { collection.deleteIndex(it) }\n
          "},{"location":"scopes-and-collections/#drop-a-collection","title":"Drop a Collection","text":"

          Example 3. Drop a Collection

          db.getCollection(collectionName, scopeName)?.let {\n    db.deleteCollection(it.name, it.scope.name)\n}\n

          Note

          There is no need to drop a user-defined scope; it is dropped automatically when the last collection associated with it is deleted.

          "},{"location":"scopes-and-collections/#list-scopes-and-collections","title":"List Scopes and Collections","text":"

          Example 4. List Scopes and Collections

          // List all of the collections in each of the scopes in the database\ndb.scopes.forEach { scope ->\n    println(\"Scope :: ${scope.name}\")\n    scope.collections.forEach {\n        println(\"    Collection :: ${it.name}\")\n    }\n}\n
          "},{"location":"using-logs/","title":"Using Logs","text":"

          Couchbase Lite \u2014 Using Logs for Troubleshooting

          Constraints

          Retrieving logs from the device is outside the scope of this feature.

          "},{"location":"using-logs/#introduction","title":"Introduction","text":"

          Couchbase Lite provides a robust Logging API \u2014 see API References for Log, FileLogger, and LogFileConfiguration \u2014 which makes debugging and troubleshooting easier during development and in production. It delivers flexibility in how logs are generated and retained, whilst also maintaining the level of logging required by Couchbase Support for the investigation of issues.

          Log output is split into the following streams:

          • Console based logging: Console logs can be independently configured and controlled, providing convenient access to diagnostic information while debugging. You can fine-tune their output to suit a specific debug scenario without interfering with any logging required by Couchbase Support for the investigation of issues.
          • File based logging: Logs are written to separate log files, filtered by log level, with each log level supporting an individual retention policy.
          • Custom logging: For greater flexibility, you can implement a custom logging class using the Logger interface.

          In all instances, you control what is logged and at what level using the Log class.

          "},{"location":"using-logs/#console-based-logging","title":"Console based logging","text":"

          Console based logging is often used to facilitate troubleshooting during development.

          Console logs are your go-to resource for diagnostic information. You can easily fine-tune their content to meet the needs of a particular debugging scenario, perhaps by increasing the verbosity and/or focusing on messages from a specific domain, to home in on the problem area.

          Console logging is enabled by default. Changes to console logging are independent of file logging, so you can adjust it without compromising any file logging streams. To change the default settings, use the Database.log property to set the required values \u2014 see Example 1.

          You will primarily use Database.log.console and ConsoleLogger to control console logging.

          Example 1. Change Console Logging Settings

          This example enables and defines console-based logging settings.

          Database.log.console.domains = LogDomain.ALL_DOMAINS\nDatabase.log.console.level = LogLevel.VERBOSE\n
          1. Define the required domain; here we turn on logging for all available domains \u2014 see ConsoleLogger.domains and enum LogDomain.
          2. Here we turn on the most verbose log level \u2014 see ConsoleLogger.level and enum LogLevel. To disable logging for the specified LogDomains set the LogLevel to NONE.
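          For example, to switch the console output off again, using the same API as Example 1:

          ```kotlin
          // Disables console logging for the configured domains
          Database.log.console.level = LogLevel.NONE
          ```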
          "},{"location":"using-logs/#file-based-logging","title":"File based logging","text":"

          File based logging is disabled by default \u2014 see Example 2 for how to enable it.

          You will primarily use Database.log.file and FileLogger to control file-based logging.

          "},{"location":"using-logs/#formats","title":"Formats","text":"

          Available file based logging formats:

          • Binary \u2014 the default for file based logging, and the most efficient format for storage and performance. Use a decoder, such as cbl-log, to view binary logs \u2014 see Decoding binary logs.
          • Plaintext
          "},{"location":"using-logs/#configuration","title":"Configuration","text":"

          As with console logging, you can set the log level \u2014 see the FileLogger class.

          With file based logging you can also use the LogFileConfiguration class\u2019s properties to specify the:

          • The path to the directory in which to store the log files
          • The log file format: the default is binary, but you can override that where necessary to output plain text logs
          • The maximum number of rotated log files to keep
          • The maximum size of a log file (in bytes); once this limit is exceeded, a new log file is started

          Example 2. Enabling file logging

          Database.log.file.apply {\n    config = LogFileConfigurationFactory.newConfig(\n        directory = \"temp/cbl-logs\",\n        maxSize = 10240,\n        maxRotateCount = 5,\n        usePlainText = false\n    )\n    level = LogLevel.INFO\n}\n
          1. Set the log file directory
          2. Change the max rotation count from the default (1) to 5. Note this means six files may exist at any one time: the five rotated log files, plus the active log file
          3. Set the maximum size (bytes) for our log file
          4. Select the binary log format (included for reference only as this is the default)
          5. Increase the log output level from the default (WARNING) to INFO \u2014 see FileLogger.level

          Tip

          \"temp/cbl-logs\" might be a platform-specific location. Use expect/actual or dependency injection to provide a platform-specific log file path.

          "},{"location":"using-logs/#custom-logging","title":"Custom logging","text":"

          Couchbase Lite allows you to register a callback to receive its log messages, which you can then route to any external logging framework.

          To do this, apps must implement the Logger interface \u2014 see Example 3 \u2014 and enable custom logging using Log.custom \u2014 see Example 4.

          Example 3. Implementing logger interface

          Here we introduce the code that implements the Logger interface.

          class LogTestLogger(override val level: LogLevel) : Logger {\n    override fun log(level: LogLevel, domain: LogDomain, message: String) {\n        // this method will never be called if param level < this.level\n        // handle the message, for example piping it to a third party framework\n    }\n}\n

          Example 4. Enabling custom logging

          This example shows how to enable the custom logger from Example 3.

          // this custom logger will not log an event with a log level < WARNING\nDatabase.log.custom = LogTestLogger(LogLevel.WARNING) \n

          Here we set the custom logger with a level of WARNING. Per the configured level, the logger's log() method is only called for messages at or above WARNING.
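          The gating behavior can be sketched in isolation. In Couchbase Lite the level check runs before your Logger.log() is invoked; here it is inlined so the snippet is self-contained, and the type names are illustrative:

          ```kotlin
          enum class Level { DEBUG, VERBOSE, INFO, WARNING, ERROR }

          // A stand-in for a custom logger: messages below the configured
          // threshold never reach the handling code.
          class FilteringLogger(private val threshold: Level) {
              val received = mutableListOf<String>()
              fun log(level: Level, message: String) {
                  if (level >= threshold) received += message
              }
          }
          ```

          A FilteringLogger(Level.WARNING) keeps WARNING and ERROR messages and drops INFO and below, mirroring the WARNING-level custom logger above.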

          "},{"location":"using-logs/#decoding-binary-logs","title":"Decoding binary logs","text":"

          You can use the cbl-log tool to decode binary log files \u2014 see Example 5.

          Example 5. Using the cbl-log tool

          macOS

          Download the cbl-log tool using wget.

          console
          wget https://packages.couchbase.com/releases/couchbase-lite-log/3.1.1/couchbase-lite-log-3.1.1-macos.zip\n

          Navigate to the bin directory and run the cbl-log executable.

          console
          ./cbl-log logcat LOGFILE <OUTPUT_PATH>\n

          CentOS

          Download the cbl-log tool using wget.

          console
          wget https://packages.couchbase.com/releases/couchbase-lite-log/3.1.1/couchbase-lite-log-3.1.1-centos.zip\n

          Navigate to the bin directory and run the cbl-log executable.

          console
          ./cbl-log logcat LOGFILE <OUTPUT_PATH>\n

          Windows

          Download the cbl-log tool using PowerShell.

          PowerShell
          Invoke-WebRequest https://packages.couchbase.com/releases/couchbase-lite-log/3.1.1/couchbase-lite-log-3.1.1-windows.zip -OutFile couchbase-lite-log-3.1.1-windows.zip\n

          Navigate to the bin directory and run the cbl-log executable.

          PowerShell
          .\\cbl-log.exe logcat LOGFILE <OUTPUT_PATH>\n
          "}]} \ No newline at end of file diff --git a/3.1/sitemap.xml b/3.1/sitemap.xml index 29f65d7cf..4d030a4a0 100644 --- a/3.1/sitemap.xml +++ b/3.1/sitemap.xml @@ -2,177 +2,177 @@ https://kotbase.dev/current/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/active-peer/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/blobs/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/changelog/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/community/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/databases/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/differences/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/documents/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/full-text-search/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/getting-started/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/handling-data-conflicts/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/indexing/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/installation/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/integrate-custom-listener/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/intra-device-sync/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/kermit/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/kotlin-extensions/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/ktx/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/license/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/live-queries/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/n1ql-query-builder-differences/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/n1ql-query-strings/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/n1ql-server-differences/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/paging/ - 2024-02-01 + 2024-02-02 daily 
https://kotbase.dev/current/passive-peer/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/peer-to-peer-sync/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/platforms/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/prebuilt-database/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/query-builder/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/query-result-sets/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/query-troubleshooting/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/remote-sync-gateway/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/roadmap/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/scopes-and-collections/ - 2024-02-01 + 2024-02-02 daily https://kotbase.dev/current/using-logs/ - 2024-02-01 + 2024-02-02 daily \ No newline at end of file diff --git a/3.1/sitemap.xml.gz b/3.1/sitemap.xml.gz index c87a31d0f74d726c466499cc4d7196f1e233a5e8..1ca5118a4c5d80f78f4454b434cc402298be7ea1 100644 GIT binary patch literal 525 zcmV+o0`mPIiwFqZSG;8c|8r?{Wo=<_E_iKh0M(dFj@vK{hWB%dzKO zdjLhE9U-#lkW?PGFO^J-blFY88U`#AGVwt&$)DQ2U+26!198Ig*1fJb-3m&;n9B6l z{rLXPeeT|emtCdIAS<0*_^msLEMNQna=ENI3Cw=sHPX~`IOq%aAI5&O{?ZLEE4`Y7 z$LN0TGKQgfVRdsv*?Wd{gmWJ%5ftfneVg(8foGX94%^N4liO_d)+ryhr)lBf%LF6A zX^1}M+pd4IJV^TAD1Qs(gQPRK1%PYpri&T}nEZZ`3v7vGFVH#eqn z1RVvlfEenm1EY9j&DjZD#W8pa2b+`=h-g`GES?0SO-S_=bE9k_;{uj3r(kv8Q;Bd@ zSgUlp(>Z%lX<(I5U?MNzLS@0(<-`KTYI#;%@=h%&fa6;UHZhMtb22Ky)TYtxG*v=r zlGSKnALVKjOp~E599V@@b$zNp$*PVhucv$`yt#c%IRZKJ+UJRxg|ymO=47Dr|FEcw zpUl?n#WUOPQyZBRyb~HOgo|pQ5oxyi4-&K_B<@#tXcO~~bZHrpfNO3onALXSfhmM_ zpMejEVm1d6%qS0$>n0WG7jN`rh0w=#RA~~911`W$TUza8bA}*xC2!@{b!Yq+^5Xu` PI30chFHjtXzZL)h_l5td literal 525 zcmV+o0`mPIiwFpu8oXr!|8r?{Wo=<_E_iKh0M(d1uG=sS$M<)Nz`N`u=+G94-_{ef z2aqM&79vXyN#*1ArIObo9lI2)7Rv-p{2*E6pE|tXmbke9QpoAp?AmR!f#k9eX*xDP zzJD{HoA>_hP%skkMw}BJo0)n2+I83KwWYwY*p=FhQ%7!=7wSHY-M0PG^luv(HNbQB z7>Af$U!#zOIb|L!LF?htxq=9ZyF*t!UO%wJOZNSKyZ>aid$Zk%%ZKg7&CNDVFe03L 
zZ$rEvx);lnr2CEXw@}t`xPVy!P$%FJ^FTV$o}pr%l;Y)ODJk$>C!V)#t5NCZ`Y?_l zGp80XyRyhcDc)FOGz>R31Qy+_lX3!PDGP?!B7@fnDUp1vl=a9Mm(v)7)4J~=dAJFz zRyx_~7%hu5&`QWKp=B_x5a-3jKnzK1c@|u>Ml8XBp?h*VF;9R?P%6R1Ca2p-s)SS} ztI~sg;+sw|Nrtj;pcPKk^|=5BtvVv@&hbHbH$w^00~+<(=Y^RVrP^rbB*4r6u&Aq@ z)Yk3Q66x+!E1BW^AXHokSJ6IW#zpHth)|Odn_u0bPRu{jrDj9~DzUa;THA#uCSxu` z1e$YZwK)i3$@~<#W)gvZwMuVRFk4@*DpkUf%PWwPmR3617{Re#$y=y)-7)_QdGUD2 PIqiP}OFynrzZL)h^63Qi