QORA's model was based on uploading data to the chain itself. To host data on QORA, a user first had to register a name to serve as a 'domain name' within QORA, then upload the data TO THE CHAIN via transactions. That data could then be referenced by the registered name, pulling up HTML/JS hosted within the blockchain.
Here…by 'bouncing' through a hosted QORA node, you can see an image hosted on the QORA chain… http://node6.qora.org:9090/qortal
By requesting 'namestorage:qortal' instead, you can see the source of the webpage… or in this case, the image… which is base58 encoded and stored on the chain…
http://node6.qora.org:9090/namestorage:qortal Pretty neat, right?
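As a rough illustration of what 'base58 encoded on the chain' means, here is a minimal sketch of base58 encoding and decoding in Python. It assumes the common Bitcoin-style alphabet; the exact alphabet and framing QORA used are not confirmed here.

```python
# Bitcoin-style base58 alphabet (assumption: QORA used something similar).
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    """Encode raw bytes (e.g. HTML/JS or an image) as a base58 string."""
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = ALPHABET[rem] + out
    # Leading zero bytes carry no numeric value, so preserve them as '1's.
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def base58_decode(s: str) -> bytes:
    """Invert base58_encode, restoring the original bytes."""
    n = 0
    for ch in s:
        n = n * 58 + ALPHABET.index(ch)
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    pad = len(s) - len(s.lstrip("1"))
    return b"\x00" * pad + body

page = b"<html><body>hosted on the chain</body></html>"
encoded = base58_encode(page)
assert base58_decode(encoded) == page
```

So what the 'namestorage:' view returns is essentially this encoded text form of the page, rather than the rendered page itself.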
Now, at first glance one may say 'wow, that's incredible!', which is partially true. However, once you get into the details and the problems this model faces, you can see why Qortal decided to do away with it in favor of another method of data storage.
The first issue with the QORA model is that the data is actually ON CHAIN, which means that in order to be fully synced with the network, every node must HOLD ALL OF THE DATA.
This is good in theory, but bad in practice when the data lives on the blockchain itself. Once the system gained a lot of users, the data would quickly grow too large for users' computers to hold… making it inevitable that the system would need a total rework once the chain grew to fill even the largest hard drives… So the long-term potential of this model is lacking.
This bloat is the main reason Qortal didn't use the on-chain storage model.
QORA's upload feature was both coded incorrectly and limited in multiple ways. For one, the upload code didn't make use of more than ONE transaction. Therefore, if a site needed more data than one block could hold (which isn't a lot, and is essentially assured to be insufficient for a full site of any real use), the user had to manually structure multiple transactions with the maximum data in each, ensuring the data didn't exceed the single-block limit, and take time and effort to make each upload continue from the previous upload's data to the point of being usable.
This caused all kinds of issues: users who attempted to upload too much data for a single tx got an error and assumed that websites could only be the size of a single tx, which wasn't the case.
Another issue remained even if the upload HAD made use of multiple transactions to handle larger pieces more easily… The uploads were still done in transactions, which limits what can be uploaded and makes it more difficult to come up with a functional end product.
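The manual chunking described above can be sketched as follows. This is a hypothetical illustration, not QORA's actual code; MAX_TX_BYTES is an invented placeholder for whatever the real per-transaction payload limit was.

```python
# Assumed per-transaction payload limit (illustrative only, not QORA's real value).
MAX_TX_BYTES = 4000

def split_into_tx_payloads(data: bytes, limit: int = MAX_TX_BYTES) -> list[bytes]:
    """Split site data into ordered payloads, each small enough for one tx.

    This is the bookkeeping QORA users had to do by hand: size each chunk
    under the limit, and keep the chunks in order so the reassembled data
    is usable.
    """
    return [data[i:i + limit] for i in range(0, len(data), limit)]

site = b"<html>" + b"x" * 10_000 + b"</html>"
chunks = split_into_tx_payloads(site)

# Every chunk fits in one transaction, and concatenating them in order
# restores the original site data.
assert all(len(c) <= MAX_TX_BYTES for c in chunks)
assert b"".join(chunks) == site
```

The point is that QORA left this entire process to the user; nothing in the protocol split, ordered, or reassembled the data automatically.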
That's all for now, I will continue this page at a later date.
— crowetic 2020/08/14 01:45