How does this differ from Spotify, DropBox etc? #6
Great analogy, and correct to the point. Just wanted to add to it. When we design our offline model the browser way, it works because the browser connects to our server, but a number of scenarios need special handling.
I am looking for an architecture where the client doesn't have to handle so many of these scenarios itself.
We are working on a similar kind of architecture for our mobility accelerator framework to tackle the issue above; hopefully we can open-source it.
Hi. I'm not sure I followed all of your points, but to pick up some of the statements: "Need to automatically refresh data in the background if the server receives a fresh copy." "File dependency of one over the other, and how to handle it?" "A record updated offline, without internet, now gets synced after three hours." "How to migrate from one version to another with a seamless upgrade experience?"
"How to handle large datasets (how much to sync, and how such datasets should refresh)?" This translates to: if you have large data blobs, make them separate files. If you have data that changes very often, do not put it in the same files as data that changes very seldom. And do not use deep nested hierarchies for your files if your leaves change often, since a changed leaf invalidates all of its parents (a sketch of that kind of split follows below). "How to work efficiently when you have both an active connection and offline data?"
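To make the "split by size and by change frequency" point concrete, here is a minimal, hypothetical sketch in TypeScript (the names and the music-library framing are mine, purely for illustration):

```typescript
// Hypothetical split of one user library into separate files, so a tiny,
// frequently-changing file can sync often without dragging the large,
// rarely-changing blobs along with it.
interface LibraryIndex {
  tracksFile: string;         // hash of a large file that rarely changes
  artworkFiles: string[];     // hashes of large binary blobs, one per album
  recentlyPlayedFile: string; // hash of a tiny file that changes constantly
}

// Anti-pattern: one deep tree where a hot leaf (recently played) sits far
// below the root, forcing every ancestor to be rewritten and re-synced on
// each play. Keeping hot data in its own shallow file avoids that cascade.
```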
Hi
I read the website. I agree that this problem has some UX challenges, but I don't see it as a particularly unexplored area; then again, I might be missing something.
Please bear with me as I expand below.
Offline first == File sync
As I see it, your problem domain becomes very similar to the challenges of a distributed version control system (git), a multi-client file backup (DropBox) or even an offline music player (Spotify). All of these have attacked the UX problems you mention and have a lot of ready solutions.
File sync, but we're working with JSON data in our APIs?
If you treat your data as files in a file system, then the filename becomes the checksum of said data (again, this is how git stores data).
This has two benefits: identical content automatically gets an identical name, so it is only stored once, and a file never changes after it is written, so it can be cached indefinitely without ever going stale.
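A minimal sketch of what that could look like, assuming a Node.js environment and SHA-256 as the checksum (both of which are my assumptions, not something the approach requires):

```typescript
import { createHash } from "crypto";

// Content addressing: the "filename" of a piece of data is the checksum of
// its bytes, exactly as git names its objects. Identical content gets an
// identical name; changed content gets a brand new name.
function contentAddress(data: unknown): string {
  return createHash("sha256").update(JSON.stringify(data)).digest("hex");
}

const playlist = { name: "Morning", tracks: ["track-1", "track-2"] };
const filename = contentAddress(playlist); // 64 hex chars identifying exactly this content
```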
Typically you would store this in a document or key-value database on the backend. Your client basically holds an LRU cache (Least Recently Used) on disk. This means that the most-accessed data is always available (as in Spotify), while you can still cap local storage at a certain amount.
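Such a cache is straightforward to sketch; here is a rough, in-memory version keyed by content hash (the Map-based eviction and the entry cap are illustrative choices, not a prescription):

```typescript
// A tiny LRU cache keyed by content hash. A Map preserves insertion order,
// so the first key is always the least recently used entry.
class BlobCache {
  private blobs = new Map<string, string>();
  constructor(private maxEntries: number) {}

  get(hash: string): string | undefined {
    const blob = this.blobs.get(hash);
    if (blob !== undefined) {
      // Re-insert to mark this entry as most recently used.
      this.blobs.delete(hash);
      this.blobs.set(hash, blob);
    }
    return blob;
  }

  put(hash: string, blob: string): void {
    this.blobs.delete(hash);
    this.blobs.set(hash, blob);
    if (this.blobs.size > this.maxEntries) {
      // Evict the least recently used entry.
      const oldest = this.blobs.keys().next().value as string;
      this.blobs.delete(oldest);
    }
  }
}
```

Because the keys are checksums, a cached blob can never be out of date; it can only be evicted.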
How do you store and read these files?
Structuring and accessing your data becomes a bit different from the traditional model. Since you are syncing files with checksums as keys, you can't hardcode application logic against a particular REST API call.
Luckily you don't have to look very far to find an excellent example, one you already know about and use every day: the World Wide Web (or HTTP + HTML).
Yes, this means that you should view your client as a browser when exploring your data. A browser typically makes no assumptions in advance about what content a URI will contain.
Example: a web browser fetches index.html and follows the links it finds there to load everything else.
Example: your data browser fetches a root node and follows the links it finds there to load everything else.
Your data model needs to be structured like a file system (a tree data structure), where you have one or several root nodes (your index.html). The child nodes contain links to data and links to other nodes.
When persisting your data you need to end up with what would be called materialized views (which is what HTML pages are). This kind of data typically fits much better in document stores than in relational databases.
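As a rough sketch of how such nodes could be shaped (the field names and placeholder hash strings below are mine, purely for illustration):

```typescript
// A node is a small, self-contained document: its own data plus links
// (content hashes) to other nodes, much like an HTML page holds content
// plus href/src links to other resources.
interface TreeNode {
  data?: unknown;
  links: Record<string, string>; // link name -> content hash of the target
}

// Root node (your "index.html"): it mostly just points at other nodes.
// The hash values are readable placeholders standing in for real checksums.
const root: TreeNode = {
  links: {
    playlists: "hash-of-playlists-node",
    settings: "hash-of-settings-node",
  },
};

// A child node as a materialized view: everything needed to render the
// playlist overview is already denormalized into this one document.
const playlists: TreeNode = {
  data: { names: ["Morning", "Focus"] },
  links: {
    Morning: "hash-of-morning-playlist",
    Focus: "hash-of-focus-playlist",
  },
};
```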
Writing
If you update a file and thereby change its file name (file name == checksum, remember), you need to make sure all links to that document are updated. You do this by rewriting all the nodes along that branch of the tree and lastly updating the root element (which becomes your transactional boundary).
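A sketch of that write path, reusing the hypothetical node shape from above (the helper names and the in-memory store are mine; a real backend would persist the blobs and swap the root pointer atomically):

```typescript
import { createHash } from "crypto";

interface TreeNode {
  data?: unknown;
  links: Record<string, string>; // link name -> content hash of child
}

const store = new Map<string, TreeNode>(); // stand-in for a content-addressed blob store

function put(node: TreeNode): string {
  const hash = createHash("sha256").update(JSON.stringify(node)).digest("hex");
  store.set(hash, node);
  return hash;
}

// Replace the leaf at `path` and rewrite every ancestor on the way back up,
// returning the new root hash. Nothing is mutated in place; the only real
// mutation is the final swap of the "current root" pointer.
function updateLeaf(nodeHash: string, path: string[], data: unknown): string {
  const node = store.get(nodeHash);
  if (!node) throw new Error(`missing node ${nodeHash}`);
  if (path.length === 0) return put({ links: {}, data });
  const [head, ...rest] = path;
  const childHash = node.links[head];
  if (childHash === undefined) throw new Error(`no link named ${head}`);
  const newChild = updateLeaf(childHash, rest, data);
  return put({ ...node, links: { ...node.links, [head]: newChild } });
}

// Usage: build a tiny tree, update one leaf, then atomically swap the root.
const leaf = put({ links: {}, data: { title: "song-1" } });
const branch = put({ links: { "song-1": leaf } });
let currentRoot = put({ links: { library: branch } });
currentRoot = updateLeaf(currentRoot, ["library", "song-1"], { title: "song-1 (remastered)" });
```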
This model also solves versioning, because you don't have to remove old data that a user needs to continue working. If you delete data, you only do so at a time when no links to it exist any more (basically a garbage collector).
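That garbage collector can be a plain mark-and-sweep from the live roots; a sketch under the same assumptions as the snippets above (live roots would include the current version plus any older versions that clients may still be holding on to):

```typescript
interface TreeNode {
  data?: unknown;
  links: Record<string, string>; // link name -> content hash of child
}

// Mark everything reachable from the live roots, then sweep the rest.
// Anything unreachable has no links pointing at it, so deleting it can
// never break a client that is still following links from a live root.
function collectGarbage(store: Map<string, TreeNode>, liveRoots: string[]): void {
  const reachable = new Set<string>();
  const stack = [...liveRoots];
  while (stack.length > 0) {
    const hash = stack.pop()!;
    if (reachable.has(hash)) continue;
    reachable.add(hash);
    const node = store.get(hash);
    if (node) stack.push(...Object.values(node.links));
  }
  for (const hash of [...store.keys()]) {
    if (!reachable.has(hash)) store.delete(hash);
  }
}
```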
Maybe I'm making a lot of assumptions, but if this problem is in fact not new, does that not imply that there are already well known UX models we can build upon as well?