Qortal Project

The future of blockchain platforms


Qortal Project Wiki


QORT Data Hosting Model

Qortal's data storage model has yet to be fully designed. Data hosting has not yet launched, but a demo version is described below, and it is not far off!

We came up with a method of data storage that could be better than using IPFS.

We have a working prototype which works in a similar way to IPFS, but which we hope will be much more user-friendly. It currently supports only static sites (HTML, JS, CSS, images, and other static assets), so you would need to either build your site as a static site directly, or take a static copy of an existing site using a tool such as HTTrack, SiteSucker, or Simply Static.

Here are two examples of sites hosted on a Qortal data node:



Note that these are older static copies of the Qortal website and wiki, so they may not be fully up to date.

At first, the system will be simpler, since it will only support simple static websites; WordPress sites can be converted for hosting on Qortal's infrastructure. If you view a website as a data node, you become a peer for it, unless you state in your settings that you don't want to. You don't have to worry about someone uploading data to another account's namespace, as that simply isn't possible: the only person who can modify data under a registered name is the owner of that name. Registered names live on the Qortal blockchain, just as they do now for the registered names we use as 'usernames'; each registered name will also have a data location tied to it ('name storage'). A Qortal data node can also act as a 'proxy' for a traditional domain, so you can host a traditional domain on Qortal as well.
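As a rough illustration of the ownership rule above (a sketch only; the class and field names are invented, not Qortal's actual schema), a registered name with an attached data location might look like this in Java:

```java
// Hypothetical sketch of a registered name record with an attached
// data location ("name storage"); names and fields are illustrative only.
public class RegisteredName {
    private final String owner;   // address of the account that owns the name
    private final String name;    // the registered name itself
    private String dataLocation;  // pointer/hash for the name's hosted data

    public RegisteredName(String owner, String name) {
        this.owner = owner;
        this.name = name;
    }

    // Only the owner of the name may modify the data tied to it.
    public boolean updateDataLocation(String requester, String newLocation) {
        if (!owner.equals(requester)) {
            return false; // anyone else is rejected
        }
        this.dataLocation = newLocation;
        return true;
    }

    public String getDataLocation() {
        return dataLocation;
    }
}
```

The point is simply that modification is gated on name ownership: anyone can read, but only the owner's updates are accepted.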

Data hosting will essentially be a clone of Qortal, without the synchronizer, and with a separate chain that keeps track of the data hashes. Data transmission will work like a torrent: the more peers you have, the more redundant copies exist. You gain peers by gaining 'followers', or when another data node views your public data (unless that node decides it does not want to host your data). Data nodes will have 100% control over what they host and what they don't, but by default, if you are a data node and you access another data node's public data, you become a peer for it.
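The default "view it, host it" behaviour could be sketched like this (class and method names are assumptions, not Qortal's API):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the default hosting policy described above:
// viewing public data makes this node a peer for it, unless the node
// owner has opted out for that creator.
public class DataNodePolicy {
    private final Set<String> hostedNames = new HashSet<>();
    private final Set<String> blockedNames = new HashSet<>();

    // Opt out of hosting a particular creator's data.
    public void block(String name) {
        blockedNames.add(name);
        hostedNames.remove(name);
    }

    // Viewing public data makes this node a peer for it by default.
    public void onViewPublicData(String name) {
        if (!blockedNames.contains(name)) {
            hostedNames.add(name);
        }
    }

    public boolean isHosting(String name) {
        return hostedNames.contains(name);
    }
}
```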

Public data is only encrypted with respect to the account that published it; the encryption only controls MODIFICATION of that data. Private data, on the other hand, will be fully encrypted. We're not yet certain how we'll decide which private data is held by other nodes. Potentially there will simply be an option to allow private data to be stored as a duplicate on your node, and you'll be rewarded for that from the fees paid by the person publishing the data. So as a data node you will have the option to allow private data to be stored, and as the person wanting data stored/duplicated, you will pay a fee for however many duplicate copies you would like; that fee goes to the data nodes that end up storing those duplicates.

Public data will be 100% FREE and will use the same transaction methodology that Q-Chat uses to put the data hashes on the data chain. This means that Qortal web hosting and other public data functionality will be 100% feeless and accessible to anyone (regardless of whether or not they hold QORT). The only reason a person looking to host data on Qortal would need QORT is to register a name, which is a ONE-TIME FEE; the name is then owned forever (until potentially sold in a future 'names market').
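The feeless, PoW-gated publishing described above can be sketched as a nonce search. This is an illustrative toy only, not Qortal's actual algorithm or difficulty rules: keep trying nonces until the hash of (payload || nonce) meets a difficulty target.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Toy proof-of-work sketch: a nonce replaces a transaction fee.
public class PowSketch {

    // Search for a nonce whose hash has `difficulty` leading zero bytes.
    public static long findNonce(byte[] payload, int difficulty) {
        for (long nonce = 0; ; nonce++) {
            if (verify(payload, nonce, difficulty)) {
                return nonce;
            }
        }
    }

    // Check whether SHA-256(payload || nonce) starts with enough zero bytes.
    public static boolean verify(byte[] payload, long nonce, int difficulty) {
        byte[] hash = sha256(payload, nonce);
        for (int i = 0; i < difficulty; i++) {
            if (hash[i] != 0) {
                return false;
            }
        }
        return true;
    }

    private static byte[] sha256(byte[] payload, long nonce) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            digest.update(payload);
            digest.update(Long.toString(nonce).getBytes(StandardCharsets.UTF_8));
            return digest.digest();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 is always available", e);
        }
    }
}
```

Verification is cheap for every node, while the search cost falls only on the uploader, which is what makes a feeless-but-spam-resistant scheme possible.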

Here’s the code being worked on so far for the data hosting: https://github.com/CalDescent14/qortal-data

Notes From CalDescent

The Qortal data nodes currently work very similarly to an AWS S3 bucket (but in a decentralized way). They essentially give you an interface to add, retrieve and update static "resources"; each resource is really just a folder structure. You could very easily take these files into a centralized app via the data node APIs and then process/execute them in whatever way you like.
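A minimal sketch of that bucket-like interface, assuming files-in-folders as the storage layout (method names are invented, not the real data-node API):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the S3-bucket-like interface described above:
// add, retrieve and update files inside a named "resource", where each
// resource is just a folder structure on disk.
public class ResourceStore {
    private final Path root;

    public ResourceStore() {
        try {
            // A throwaway root directory for this sketch.
            this.root = Files.createTempDirectory("resource-store");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Add or update a single file within a named resource.
    public void put(String resourceName, String relativePath, byte[] contents) {
        try {
            Path file = root.resolve(resourceName).resolve(relativePath);
            Files.createDirectories(file.getParent());
            Files.write(file, contents);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Retrieve a file from a named resource.
    public byte[] get(String resourceName, String relativePath) {
        try {
            return Files.readAllBytes(root.resolve(resourceName).resolve(relativePath));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```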

I am also in the process of building several "interfaces" on top of the static files. The currently planned ones are:

  1. Static site serving (mostly finished)
  2. Git repositories
  3. Blog

The idea with these interfaces is that they allow static file structures to be served and written to by the data nodes. For static sites, it's mostly just a case of serving them over a webserver, but for blogs it will be a custom plugin that renders the blog and backs the data off to JSON files, essentially using them as the database.
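A toy sketch of a blog post backed off to JSON (the field names are invented for illustration, and a real plugin would use a proper JSON library rather than hand-built strings):

```java
// Hypothetical sketch of "backing a blog off to JSON files and using
// them as the database": each post serializes to a small JSON document
// that a data node could store as a static file.
public class BlogPost {
    private final String title;
    private final String body;

    public BlogPost(String title, String body) {
        this.title = title;
        this.body = body;
    }

    // Serialize to JSON by hand to keep the sketch dependency-free.
    public String toJson() {
        return "{\"title\":\"" + escape(title) + "\",\"body\":\"" + escape(body) + "\"}";
    }

    private static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }
}
```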

For git repositories, I am planning on using an open source Java git client such as Gitblit and back its data off to the static files. That way you can add your data node as a remote and pull/push to it.

I hadn't thought of doing a "static website builder", but that is a really great idea. If we can allow people to build sites within Qortal using an open source project, that would save a lot of work and would be a really nice feature. One potential issue with Jekyll is that it's not Java-based, so it would be great if we could find a native Java alternative. Otherwise it's complicated to bundle it with our app.

In terms of dynamic content, I haven't got too far into the details of that, since we can achieve most of our plans using statically hosted files plus "smart interfaces" on top - such as git or the website builder. One day we may be able to have, for instance, a PHP interface in which we allow the files to be executed by PHP. But it won't be too soon, as the sandboxing would be non-trivial - it may need virtual machines or possibly Docker to ensure the execution is fully isolated and locked down.

Databases are even more difficult. I will probably start by making a MySQL interface that backs off its data to the static files. But I worry that it will be impractical for a lot of use cases, because each INSERT or UPDATE would require a new transaction to be published on chain. So it would only work for infrequent updates, such as a basic website.
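A toy sketch of why that is chain-heavy: every INSERT or UPDATE triggers an on-chain publish. The class below just counts the would-be transactions (names and structure are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a MySQL-over-static-files interface: every
// write must publish a new transaction on the data chain, which is why
// it only suits infrequently-updated data.
public class ChainBackedTable {
    private final List<String> rows = new ArrayList<>();
    private int transactionsPublished = 0;

    public void insert(String row) {
        rows.add(row);
        publishUpdateTransaction(); // each INSERT hits the chain
    }

    public void update(int index, String row) {
        rows.set(index, row);
        publishUpdateTransaction(); // each UPDATE hits the chain
    }

    private void publishUpdateTransaction() {
        transactionsPublished++; // stand-in for a real on-chain publish
    }

    public int getTransactionsPublished() {
        return transactionsPublished;
    }

    public List<String> getRows() {
        return rows;
    }
}
```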

We are limiting the number of transactions / resource updates using a proof-of-work nonce, which will stop someone from spamming the chain constantly and will limit the system to "low throughput" use cases. The data nodes already handle partial updates, so only the changes are uploaded instead of re-uploading the entire file structure every time.
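A sketch of size-dependent difficulty, with entirely invented thresholds and values, just to show the shape of the idea that bigger uploads require more work:

```java
// Hypothetical sketch: scale proof-of-work difficulty with the size of
// the data being published. The thresholds and difficulty values below
// are invented for illustration, not Qortal's actual parameters.
public class PowDifficulty {
    public static int difficultyFor(long sizeBytes) {
        if (sizeBytes <= 1_000_000L) {
            return 12; // small uploads: cheap to publish
        }
        if (sizeBytes <= 100_000_000L) {
            return 16; // medium uploads: noticeably more work
        }
        return 20;     // large uploads: expensive to publish
    }
}
```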

Here's how "decentralized LAMP" could work in the future:

  1. Add a MySQL resource and publish the initial database schema and data. This would create a transaction on chain and sync the associated data around the network.
  2. Add a PHP resource and publish your initial PHP files.
  3. Inside your PHP database class you would point it to the Qortal MySQL instance. This would essentially map to http://localhost:12393/mysql/MyRegisteredName
  4. Deploy the PHP resource which would add a transaction and sync its data around the network.
  5. Other users could then run the PHP app (and database) by visiting `http://localhost:12392/MyRegisteredName`
  6. The MySQL resource would have permissions set so that it can be updated by all (or a certain group of people, or just the original creator - depending on the use case).
  7. Everyone's nodes would watch for local changes to the MySQL repo, and submit a transaction automatically when needed.
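The URL mappings in steps 3 and 5 could be collected in a small helper. The ports and path layout are copied from the list above; everything else is an assumption about how a local data node might route requests:

```java
// Hypothetical helper collecting the local data-node URL patterns from
// the "decentralized LAMP" walkthrough above.
public class DataNodeUrls {
    // Step 3: the PHP app's database class points at the local MySQL resource.
    public static String mysqlResource(String registeredName) {
        return "http://localhost:12393/mysql/" + registeredName;
    }

    // Step 5: other users run the PHP app by visiting the local gateway.
    public static String phpApp(String registeredName) {
        return "http://localhost:12392/" + registeredName;
    }
}
```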

Obviously the approach above has major drawbacks for serious / high-throughput use cases, and also for ones that need to store sensitive data. But it would work for projects that update their data infrequently (e.g. WordPress), so it could be a fairly straightforward way to get started. Most of the foundations for this are already in place.

The current plan is to include a Java-based version of HTTrack in the data nodes, so that they can automatically convert a URL and add the static copy to the data chain. After that, we'll have the ability to include a bot in the data nodes that watches a given URL for changes, automatically creates a static copy, and publishes any differences to the chain each time. I'm not sure whether this feature will make it into v1 of the data nodes; it will mostly depend on whether a Java-based static site downloader already exists. If not, we'll have to create our own, which will take a while.
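The change-watching bot could rest on simple per-file hash comparison, as sketched here (illustrative only; the real implementation may differ):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of "watch for changes, publish only the
// differences": hash each file and compare against the last published
// snapshot, reporting only the files whose contents changed.
public class ChangeDetector {
    private final Map<String, String> lastPublished = new HashMap<>();

    // Return the paths whose contents differ from the last snapshot,
    // and record the new hashes as the latest published state.
    public List<String> changedFiles(Map<String, byte[]> currentFiles) {
        List<String> changed = new ArrayList<>();
        for (Map.Entry<String, byte[]> entry : currentFiles.entrySet()) {
            String digest = sha256Hex(entry.getValue());
            if (!digest.equals(lastPublished.get(entry.getKey()))) {
                changed.add(entry.getKey());
                lastPublished.put(entry.getKey(), digest);
            }
        }
        return changed;
    }

    private static String sha256Hex(byte[] data) {
        try {
            byte[] hash = MessageDigest.getInstance("SHA-256").digest(data);
            StringBuilder sb = new StringBuilder();
            for (byte b : hash) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 is always available", e);
        }
    }
}
```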

The storage nodes will have their own API endpoints to access and update data. They'll also have the API endpoints that we're already used to, so that you can access blocks and transactions on the data chain.

More Notes From CalDescent:

The way public data hosting works at the moment is by using confirmable transactions (so that there is always an on-chain record of updates to each website/service), but rather than a transaction fee, we require a proof-of-work nonce to be calculated and included with the transaction. The difficulty depends on the size of the files being added to the network. We use a similar PoW nonce for Q-Chat transactions.

Then, in order for the data itself to replicate, the creator must have viewers of the data, or followers of their account. This allows good data to propagate, whereas data that has no viewers or followers wouldn't be taking up space on people's nodes. It also allows node owners to control exactly what data they are hosting, by following only those creators that they want to support (similar to torrents).

Using a nonce rather than a transaction fee removes the cost barrier, which encourages people to use the system. And since the uploader has to perform some difficult proof-of-work calculations each time, it prevents someone from easily spamming the chain.

Also, the data hashes are being written to a completely separate chain for scalability reasons (it doesn't fill up the blocks on the main chain and also allows us to have multiple independent data chains when we need to scale up). In v1, there will be no way to spend QORT on the data chain, so there is no scope for transaction fees. But in subsequent versions we will add a "bridge" to move QORT between chains. At this point, we can add features such as allowing uploaders to pay extra for more copies of their data to be stored, and we can reward data nodes for the data they are hosting.

All of the above relates to "public data" only. We will eventually have support for "private data" too, and this would work completely differently. It would likely be a case of paying QORT to host it, which would be paid out to the participating data nodes. But that feature won't be in place for a while.

qort_data_hosting_model.txt · Last modified: 2021/10/11 10:50 by gfactor