qort_new_data_hosting_model — revision 10/11/2021 10:42 by gfactor; removed 10/11/2021 10:47 (current) by gfactor

====== QORT New Data Hosting Model ======

**The data storage model of Qortal has yet to be fully designed. Qortal has not yet launched data hosting, but there is a working demo (linked below) and data hosting is not far out! Basic information below (to be updated soon).**

We came up with a method of data storage that could be better than using IPFS.

We already have a working prototype that works in a similar way to IPFS, but which we hope will be a lot more user friendly. It currently supports only static sites (HTML, JS, CSS, images and other static assets), so you would need to either build your site as a static site, or take a static copy of an existing site using a tool such as HTTrack, SiteSucker, Simply Static, etc.

Here’s an example site hosted on a Qortal data node:

http://

http://

It’s an older static copy of the Qortal wiki, so it is not up to date.

At first it will be simpler, since we will only be supporting simple static websites, and we can convert WordPress sites to be hosted on Qortal's data nodes.

Data hosting will essentially be a clone of Qortal without the synchronizer, with a separate chain that keeps track of the data hashes. Data transmission will be done like a torrent: the more peers you have, the more redundant copies there will be. You will obtain peers by gaining 'followers'.

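The hash-tracking and torrent-style transfer described above can be sketched roughly as follows. The chunk size, hash algorithm and function names are assumptions for illustration, not Qortal's actual design — the point is that only one small hash needs to live on the data chain, while peers fetch and verify chunks independently:

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # assumed chunk size, not a Qortal parameter

def chunk_hashes(data: bytes, chunk_size: int = CHUNK_SIZE) -> list:
    """Split the data into fixed-size chunks and hash each one (torrent-style)."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def resource_hash(data: bytes) -> str:
    """One identifier for the whole resource, derived from its chunk hashes.
    This is the kind of value a data chain could record on-chain."""
    joined = "".join(chunk_hashes(data)).encode()
    return hashlib.sha256(joined).hexdigest()
```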
Public data is only encrypted with respect to the account that put the data there; the encryption only controls MODIFICATION of that data. Private data, on the other hand, will be fully encrypted. We’re not entirely certain yet how we'll decide which private data is held by other nodes. Potentially there will simply be an option to allow private data to be stored as a duplicate on your node. You’ll be rewarded for holding that data from the fees paid by the person putting the data up to have it stored/hosted.

Public data will be 100% FREE and will use the same transaction methodology as Q-Chat to put the data hashes on the data chain. This means that Qortal web hosting and other public data functionality will be 100% feeless and accessible to anyone, regardless of whether they hold QORT. The only reason a person looking to host data on Qortal would need QORT is to register a name, which is a ONE-TIME FEE; the name is then owned forever (until potentially sold in the future 'names market').

Here’s the code being worked on so far for the data hosting: https://

===== Notes From CalDescent =====

The Qortal data nodes currently work very similarly to an AWS S3 bucket, but in a decentralized way. They essentially give you an interface to add, retrieve and update static "files".

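As a rough illustration of that S3-style interface, here is a minimal in-memory sketch. The key structure (name, service, path), the versioning behaviour and the method names are assumptions for illustration, not the data nodes' real API:

```python
# Hypothetical sketch of an S3-bucket-style object interface: objects are
# keyed by (name, service, path) and can be added, retrieved and updated.
class DataStore:
    def __init__(self):
        self._objects = {}  # (name, service, path) -> (version, content)

    def put(self, name: str, service: str, path: str, content: bytes) -> int:
        """Add or update an object; returns the new version number."""
        key = (name, service, path)
        version = self._objects.get(key, (0, None))[0] + 1
        self._objects[key] = (version, content)
        return version

    def get(self, name: str, service: str, path: str):
        """Retrieve the latest content for an object, or None if absent."""
        entry = self._objects.get((name, service, path))
        return entry[1] if entry else None
```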
I am also in the process of building several "interfaces" for the data nodes:

- Static site serving (mostly finished)
- Git repositories
- Blog

The idea with these interfaces is that they allow static file structures to be served by, and written to, the data nodes. For static sites it's mostly just a case of serving them over a webserver, but for blogs it will be a custom plugin that renders the blog and backs its data off to JSON files, essentially using them as the database.

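The "JSON files as the database" idea might look something like this sketch, where each blog post is serialized to a static JSON file that the nodes could replicate like any other asset. The file layout and field names are illustrative assumptions:

```python
import json
import os
import tempfile

def save_post(directory: str, slug: str, title: str, body: str) -> str:
    """Write one blog post as a static JSON file and return its path.
    The JSON file doubles as the 'database row' for this post."""
    path = os.path.join(directory, slug + ".json")
    with open(path, "w") as f:
        json.dump({"slug": slug, "title": title, "body": body}, f)
    return path

def load_post(directory: str, slug: str) -> dict:
    """Read a post back from its JSON file (the 'database' lookup)."""
    with open(os.path.join(directory, slug + ".json")) as f:
        return json.load(f)
```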
For git repositories, …

I hadn't thought of doing a "…

In terms of dynamic content, I haven't …

Databases are even more difficult. I will probably start by making a MySQL interface that backs its data off to the static files. But I worry that it will be impractical for a lot of use cases, because each INSERT or UPDATE would require a new transaction to be published on chain. So it would only work for infrequent updates, such as a basic website.

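A minimal sketch of that trade-off, assuming a hypothetical flat-file store where every INSERT or UPDATE also publishes one on-chain transaction (the class and method names are invented for illustration):

```python
# Sketch: a key/value "database" backed by static files, where each write
# carries the cost of publishing a transaction on the data chain.
class FlatFileDB:
    def __init__(self):
        self.rows = {}          # (table, key) -> row dict
        self.transactions = []  # one entry per simulated on-chain publish

    def upsert(self, table: str, key, row: dict) -> None:
        """INSERT or UPDATE a row; each call costs one published transaction."""
        self.rows[(table, key)] = row
        self.transactions.append(("UPSERT", table, key))

    def select(self, table: str, key):
        """Read a row back; reads are free (no transaction needed)."""
        return self.rows.get((table, key))
```

The `transactions` list makes the drawback visible: two updates to the same row still cost two on-chain publishes, which is why this only suits infrequently updated data.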
We are limiting the number of transactions / resource updates using a proof-of-work nonce. That will stop someone from being able to spam the chain constantly, which will have the effect of limiting it to "low throughput" use cases.

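The proof-of-work nonce idea can be sketched like this; the difficulty scheme, parameters and function names here are assumptions for illustration, not Qortal's actual values. Finding the nonce is deliberately expensive, while verifying it costs a single hash:

```python
import hashlib
from itertools import count

def find_nonce(payload: bytes, difficulty_bits: int = 12) -> int:
    """Search for a nonce so that sha256(payload + nonce) has
    `difficulty_bits` leading zero bits. This is the publisher's cost."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_nonce(payload: bytes, nonce: int, difficulty_bits: int = 12) -> bool:
    """Verification is one hash, so nodes can reject spam cheaply."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```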
Here's how a "dynamic" site could work:

- Add a MySQL resource and publish the initial database schema and data. This would create a transaction on chain and sync the associated data around the network.
- Add a PHP resource and publish your initial PHP files.
- Inside your PHP database class you would point it at the Qortal MySQL instance. This would essentially map to http://
- Deploy the PHP resource, which would add a transaction and sync its data around the network.
- Other users could then run the PHP app (and database) by visiting http://
- The MySQL resource would have permissions set so that it can be updated by all (or a certain group of people, or just the original creator, depending on the use case).
- Everyone's …

Obviously the approach above has major drawbacks for serious / high-throughput use cases, and also for ones that need to store sensitive data. But it would work for projects that update their data infrequently (e.g. WordPress), so it could be a fairly straightforward way to get started. Most of the foundations are already in place for this.

The current plan is to include a Java-based version of HTTrack in the data nodes, so that they can automatically convert a URL and add the static copy to the data chain. After that, we'll have the ability to include a bot in the data nodes that watches a given URL for changes and then automatically creates a static copy and publishes any differences to the chain each time. I'm not sure whether this feature will make it into v1 of the data nodes; it will mostly depend on whether a Java-based static site downloader is already available. If not we'll have to create our own, which will take a while.

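The change-watching bot could work along these lines: hash every file in a static snapshot, compare it with the previous snapshot, and re-publish only the differences. The snapshot format (a mapping of path to file content) is an assumption for illustration:

```python
import hashlib

def snapshot_hashes(files: dict) -> dict:
    """Map each path in a static snapshot to the hash of its content."""
    return {path: hashlib.sha256(content).hexdigest()
            for path, content in files.items()}

def diff_snapshots(old: dict, new: dict) -> dict:
    """Return the paths that would need re-publishing on the data chain:
    files whose hash changed, plus additions and removals."""
    changed = {p for p in new if p in old and old[p] != new[p]}
    added = set(new) - set(old)
    removed = set(old) - set(new)
    return {"changed": changed, "added": added, "removed": removed}
```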
The storage nodes will have their own API endpoints to access and update data. They'll …

===== More Notes From CalDescent =====

The way public data hosting works at the moment is by using confirmable transactions, so that there is always an on-chain record of updates to each website/resource.

Then, in order for the data itself to replicate, the creator must have viewers of the data, or followers of their account. This allows good data to propagate, whereas data that has no viewers or followers wouldn't.

The idea behind using a nonce rather than a transaction fee is that it removes the cost barrier and so encourages people to use the system, while still requiring the uploader to perform some difficult proof-of-work calculations each time, which prevents someone from easily spamming the chain.

Also, the data hashes are being written to a completely separate chain for scalability reasons (it doesn't add load to the main Qortal chain).

All of the above relates to "public data".