We have an Apache server that hosts a website. The website issues a GET request that runs a C++ program on the server. The program creates a 3D scene and answers the GET request with a JSON document containing the scene. The scene is then rendered in the browser using WebGL.
This works perfectly fine for small scenes, but Chrome throws an error when the JSON is larger than ~125 MB; Firefox can handle JSON up to ~260 MB.
I create the GET request using jQuery:
BP2011D1.ServerProxy.prototype.loadMesh = function(requestParameter, callbackOnSuccess, callbackOnError)
{
    $.ajax({
        type: "GET",
        url: this.getServerURL() + "/cgi-bin/" + this._treemapDirectory + "/hpi_bp2011_app_fcgi",
        data: requestParameter + "&functionName=getMesh",
        dataType: "json",
        success: callbackOnSuccess.execute,
        error: callbackOnError.execute
    });
};
For large JSON responses callbackOnError is executed, so the JSON appears to be invalid.
I know that the JSON should be perfectly valid.
I suspect that the browser cannot handle such a big JSON string: it clips some characters at the end, and the missing brackets make the JSON invalid.
Is there a way to handle this problem? I need to handle JSON up to 800 MB.
asked May 23, 2012 by Jens Ehrlich (edited May 23, 2012)
Comments:
- 800 MB of JSON? That's rather ludicrous... Try sending it over in multiple smaller chunks. – Marc B
- @MarcB Even at large chunks, 800 MB is just too massive for a one-time parse. The browser even chokes for several seconds looping 100k iterations, let alone 800 MB of data. – Joseph
- @Joseph: I'm guessing this is on an internal network, so 800 MB at gigabit speeds isn't too nasty, but it is definitely a HUGE hit on the browser. – Marc B
- Our Apache uses gzip, so the JSON is only ~30 MB when transferred over the network. – Jens Ehrlich
3 Answers
You could try using a more compact format:
http://code.google.com/p/webgl-loader/
You could also roll your own format and download the large parts directly in binary by using a binary XHR:
http://www.html5rocks.com/en/tutorials/file/xhr2/
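A minimal sketch of the binary-XHR approach: the buffer layout here (a uint32 vertex count followed by float32 position triples) is a hypothetical format invented for illustration, not something the server above actually emits, and the XHR portion is browser-only.

```javascript
// Parse a binary mesh buffer. Assumes a hypothetical layout of
// [uint32 vertexCount][vertexCount * 3 * float32 positions], little-endian.
function parseMeshBuffer(buffer) {
    var view = new DataView(buffer);
    var vertexCount = view.getUint32(0, true); // true = little-endian
    // Typed-array view over the position data; no copying involved.
    var positions = new Float32Array(buffer, 4, vertexCount * 3);
    return { vertexCount: vertexCount, positions: positions };
}

// Browser-only part: fetch the buffer with XHR2. Guarded so the parser
// above can also be exercised outside a browser. The URL is a placeholder.
if (typeof XMLHttpRequest !== 'undefined') {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/cgi-bin/example/hpi_bp2011_app_fcgi?functionName=getMesh', true);
    xhr.responseType = 'arraybuffer'; // the key XHR2 feature
    xhr.onload = function () {
        var mesh = parseMeshBuffer(xhr.response);
        // mesh.positions can be handed straight to gl.bufferData(...)
    };
    xhr.send();
}
```

The advantage over JSON: a Float32Array view can be passed directly to WebGL without any string parsing at all.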
You need to identify what creates the problem:
- fetching JSON that big
- evaluating JSON that big (eval())
- asking JavaScript to store that much data
I assume you fetch the JSON by XHR. In the first two cases you could try creating some pagination where you add a GET parameter part={0,1,2,3,4,5...}, allowing the browser to fetch a huge JSON in multiple XHR requests (by implementing this server-side). This would require the server to split the JSON and the browser to merge it:
{a:1, b:5} - split -> {a:1} and {b:5} - merge -> {a:1, b:5}
or:
[1, 5] - split -> [1] and [5] - merge -> [1, 5]
Please understand that while doing this, you need to find a good place to split and merge, e.g. in this case:
{small: <1mb object>, huge: <799mb object>}
Or you might decide to just fetch the string, and split and merge that.
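The browser-side merge described above could be sketched like this (a hypothetical helper, assuming the server splits a top-level object or array so that no single part is itself huge):

```javascript
// Merge JSON chunks back into one object or array.
// {a:1} and {b:5} merge to {a:1, b:5}; [1] and [5] merge to [1, 5].
function mergeChunks(chunks) {
    var merged = Array.isArray(chunks[0]) ? [] : {};
    chunks.forEach(function (chunk) {
        if (Array.isArray(merged)) {
            merged = merged.concat(chunk); // array parts: concatenate
        } else {
            for (var key in chunk) {       // object parts: copy keys over
                if (chunk.hasOwnProperty(key)) merged[key] = chunk[key];
            }
        }
    });
    return merged;
}
```

Each chunk would come from one XHR request with its own part=N parameter, so no single request ever carries the full 800 MB.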
Download the data in chunks (batches of several MB) via AJAX. Since you can assume an advanced browser (it supports WebGL), you can use Web Workers as "threads" to parse each batch in the background and then add the result to a main object for use.
For every chunk:
download -> create worker -> parse JSON in worker -> add to main object
Even if this circumvents the 800 MB JSON parsing issue, I don't know the effects of an 800 MB object, though. It would still be heavy on the browser heap.
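The per-chunk worker pipeline above could be sketched as follows. The scene layout ({meshes: [...]}) is a hypothetical example, and the Worker/Blob part is browser-only; only the accumulation logic is plain JavaScript.

```javascript
// Accumulate parsed chunks into one main object (no browser APIs needed).
function makeAccumulator() {
    var main = { meshes: [] }; // hypothetical scene layout
    return {
        add: function (chunk) { main.meshes = main.meshes.concat(chunk.meshes); },
        result: function () { return main; }
    };
}

// Browser-only sketch: parse each downloaded chunk in a Web Worker
// so the main thread stays responsive during JSON.parse.
if (typeof Worker !== 'undefined' && typeof URL !== 'undefined') {
    var workerSource =
        'onmessage = function (e) { postMessage(JSON.parse(e.data)); };';
    var worker = new Worker(URL.createObjectURL(new Blob([workerSource])));
    var acc = makeAccumulator();
    worker.onmessage = function (e) {
        acc.add(e.data); // a parsed chunk arrives here
    };
    // worker.postMessage(chunkText); // send each raw JSON chunk as text
}
```

Note that postMessage copies data between the worker and the main thread (via structured clone), so there is still a memory cost per chunk; keeping chunks to a few MB each keeps that cost bounded.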