How bad is it to use JavaScript (CoffeeScript) for implementing a heavy computational task? I am concerned with an optimization problem, where an optimal solution cannot be computed that fast.
JavaScript was chosen in the first place because visualization is required, and instead of adding the overhead of communication between different processes, the decision was to just implement everything in JavaScript.
I don't see a problem with that, especially when looking at the benchmarks game. But I often receive the question: why on earth JavaScript?
I would argue in the following way: it is an optimization problem, NP-hard. It does not matter how much faster another language would be, since this only adds a constant factor to the running time - is that true?
Comments:
- Why JavaScript? Because it runs on every modern browser -- it is from this that it has "spilled out" to other markets. 'nough said. (If JavaScript did not have such ubiquity it would not be a pip compared to other dynamic languages... anyway, choose the right tool for the job. Fast enough is fast enough, and there is a difference between performance and scalability.) – user166390
- You may also want to define "heavy computational task". I would say there's no real issue choosing JS per se, unless you want to update the interface on the fly with fancy movement and such. Then I would question what you're doing and why anyone would need to watch over a heavy computational task. – Kai Qing
- FWIW, I think that this JavaScript PC emulator is pretty cool ;-) – user166390
- Start with JavaScript if that's easiest/quickest, then re-code if the runtime is unacceptable. How much time are you talking about? – TJD
- +1 for "using JS adds a constant factor" to the order of the problem. Although I'm not 100% sure this is true, it makes a lot of sense if it is. All I'll say is, thanks to Google we have the V8 JS engine, which makes code execution much faster. – Alex
4 Answers
Brendan Eich (Mozilla's CTO and creator of JavaScript) seems to think so.
http://brendaneich.com/2011/09/capitoljs-rivertrail/
I took time away from the Mozilla all-hands last week to help out on-stage at the Intel Developer Forum with the introduction of RiverTrail, Intel’s technology demonstrator for Parallel JS — JavaScript utilizing multicore (CPU) and ultimately graphics (GPU) parallel processing power, without shared memory threads (which suck).
See especially his demo of JS creating a scene graph:
Here is my screencast of the demo. Alas, since RiverTrail currently targets the CPU and its short vector unit (SSE4), and my screencast software uses the same parallel hardware, the frame rate is not what it should be. But not to worry, we’re working on GPU targeting too.
At CapitolJS and without ScreenFlow running, I saw frame rates above 35 for the Parallel demo, compared to 3 or 2 for Sequential.
If JavaScript is working for you and meeting your requirements, what do you care what other people think?
One way to answer the question would be to benchmark it against an implementation in a "good" language (your terms, not mine) and see how much of a difference it makes.
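For example, a minimal timing harness on the JavaScript side might look like the sketch below; `solve` is a hypothetical placeholder for the actual optimization routine, and you would feed the same input to the implementation in the other language:

    // Minimal sketch: time the JavaScript implementation so the result
    // can be compared against the same algorithm in another language.
    // `solve` is a hypothetical stand-in for the real optimizer.
    function solve(n) {
      var best = 0;
      for (var i = 0; i < n; i++) {      // placeholder workload
        best = Math.max(best, (i * 31) % 97);
      }
      return best;
    }

    var t0 = Date.now();
    var result = solve(10000000);
    var elapsed = Date.now() - t0;
    console.log('result=' + result + ', elapsed=' + elapsed + ' ms');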
I don't buy the visualization argument. If your "good" language implementation was communicating with a front end, you might be able to have faster performance and visualization. You might be overstating the cost of communication to make yourself feel better.
I also don't like your last argument. JavaScript is single-threaded; another language might offer parallelism that JavaScript can't. The algorithm can make a huge difference, too; perhaps you've settled on one that is far from optimal.
I can tell you that no one in their right mind would consider using JavaScript for computationally intensive tasks like scientific computing. SO did have a reference to a JavaScript linear algebra library, but I doubt that it could be used for analysis of non-linear systems with millions of degrees of freedom. I don't know what kind of optimization problem you're dealing with.
With that said, I'd wonder if it's possible to treat this question fairly in a forum like this. It could lead to a lot of back and forth and argument.
Are you seeking a justification for your views or do you want alternatives? It's hard to tell.
Well, it's not exactly a constant: JavaScript is usually measured at some multiple X slower than Java, but as you can see from the benchmark shootout results, how much slower really depends on the algorithm. Those numbers are for V8 JavaScript, so it also depends on the browser you are running it in; V8 is the top performer here, but the same code can run dramatically slower on other VMs: ~2x-10x.
If your problem can be subdivided across parallel processors, then the new Web Workers API can dramatically improve the performance of JavaScript. It's not single-threaded anymore, and it can be really fast.
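For illustration, here is a minimal sketch of fanning a search range out across workers. The file names, the message format, and the placeholder objective function are all assumptions made for the example, not part of any fixed API:

    // main.js -- minimal sketch: split a search range across N workers.
    var NUM_WORKERS = 4;
    var CHUNK = 250000;
    var results = [];
    for (var i = 0; i < NUM_WORKERS; i++) {
      var worker = new Worker('worker.js');
      worker.onmessage = function (e) {
        results.push(e.data);
        if (results.length === NUM_WORKERS) {
          console.log('best over all chunks:', Math.max.apply(null, results));
        }
      };
      // Each worker gets its own slice of the search space.
      worker.postMessage({ start: i * CHUNK, end: (i + 1) * CHUNK });
    }

    // worker.js -- evaluate one slice and report the best value found.
    self.onmessage = function (e) {
      var best = -Infinity;
      for (var x = e.data.start; x < e.data.end; x++) {
        var score = -Math.abs(x - 500000); // placeholder objective
        if (score > best) best = score;
      }
      self.postMessage(best);
    };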
Visualization can be done from the server or from the client. If you think lots of people are going to be executing your program at once, you might not want to run it on the server: if one of these eats up that much processor time, think what 1,000 of them would do to your server. With JavaScript you get a cheap parallel processor by federating all the browsers. But as far as visualization goes, it could also be done on the server and sent to the client as it works. It's just a matter of what you think is easier.
The only way to answer this question is to measure and evaluate those measurements as every problem and application has different needs. There is no absolute answer that covers all situations.
If you implement your app/algorithm in JavaScript, profile that JavaScript to find out where the performance bottlenecks are, optimize them as much as possible, and it's still too slow for your application, then you need a different approach.
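As a first pass, coarse-grained timing around suspected hot spots can narrow things down before reaching for the browser's profiler. The two stages below are hypothetical stand-ins for real phases of an optimizer, used only to show where the timing calls go:

    // Minimal sketch: console.time around candidate hot spots.
    // Both functions are hypothetical placeholders.
    function buildCandidates(n) {
      var out = [];
      for (var i = 0; i < n; i++) out.push(i % 1000);
      return out;
    }

    function evaluateCandidates(cands) {
      var best = -Infinity;
      for (var i = 0; i < cands.length; i++) {
        if (cands[i] > best) best = cands[i];
      }
      return best;
    }

    console.time('buildCandidates');
    var candidates = buildCandidates(1000000);
    console.timeEnd('buildCandidates');

    console.time('evaluateCandidates');
    var best = evaluateCandidates(candidates);
    console.timeEnd('evaluateCandidates');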
If, on the other hand, you already know that this is a massively time-draining problem and that even in the fastest language possible it will still be a meaningful bottleneck for your application, then you already know that JavaScript is not the best choice, as it will seldom (if ever) be the fastest executing option. In that case, you need to figure out whether the communication between some sort of native code implementation and the browser is feasible and will perform well enough, and go from there.
As for NP-hard vs. a constant factor, I think you're fooling yourself. NP-hard means you need to first make the algorithm as smart as possible so you've reduced the computation to the smallest/fastest possible problem. But even then, the constant factor can still be massively meaningful to your application. A constant factor could easily be 2x or even 10x, which would still be very meaningful even though constant. Imagine the NP-hard part took 20 seconds in native code and the constant factor for JavaScript was 10x slower. Now you're looking at 20 sec vs. 200 sec. That's probably the difference between something that might work for a user and something that might not.