javascript - MongoDB cache collection in memory? - Stack Overflow

I am using MongoDB to store a collection of polygons and $geoIntersects queries to find which polygon contains a specific point.

My mongoose Schema looks like this:

var mongoose = require('mongoose');

var LocationSchema = mongoose.Schema({
    name: String,
    geo: {
        // GeoJSON object, e.g. { type: 'Polygon', coordinates: [...] }
        type: {
            type: String
        },
        coordinates: []
    }
});

LocationSchema.index({geo: '2dsphere'});

module.exports = mongoose.model('Location', LocationSchema);

So, each element is a polygon. I added the 2dsphere index hoping that the queries would be faster and the entire collection would be stored in memory. Unfortunately it takes about 600ms for ~20 queries, which is way too much for my use case.

My queries look like this:

Location.find({
    geo: {
        $geoIntersects: {
            $geometry: {
                type: 'Point',
                coordinates: [pos.lng, pos.lat]
            }
        }
    }
}, ...)

Is there any way I can make this run faster? Can I force MongoDB to cache the entire collection in memory (the collection never changes)? Is there any way I can check whether the collection is actually stored in an in-memory cache?

Also, are there any alternatives I can use (e.g. a library) that allow for fast geospatial queries?


asked Jun 14, 2016 at 13:05 by XCS
  • Can you post an explain() of your query? – KRONWALLED Commented Jun 14, 2016 at 13:13
  • When executing a query in the mongo shell, use db.collectionName.find(query).explain() – profesor79 Commented Jun 14, 2016 at 13:15
  • Hmm, but I'm using mongoose and not using the shell. I will try that, thanks. – XCS Commented Jun 14, 2016 at 13:17
  • Here is the explain: codepaste/qjaron – XCS Commented Jun 14, 2016 at 13:22
  • How exactly are you running those 20 queries? Concurrently, sequentially? Can they be folded into a single query perhaps? – robertklep Commented Jun 14, 2016 at 13:27

2 Answers

With MongoDB 3.2+ (Enterprise) you can use the inMemory storage engine, which gives you an instance where the seeded collection lives entirely in memory (changes are not persisted).
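A minimal sketch of starting such an instance (the inMemory engine ships with MongoDB Enterprise; the dbpath shown is an assumption and is used only for metadata):

```shell
mongod --storageEngine inMemory --dbpath /var/lib/mongodb-inmem
```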

On the other hand, if your collection is static, you could implement a cache layer such as Redis, or even a TTL-indexed collection that stores each query together with its response.

The seeding could be done by taking a backup of the current collection and restoring it into the in-memory instance.
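That seeding step could be sketched with the standard dump/restore tools (the database name geodb, collection name locations, and port 27018 for the in-memory instance are all assumptions):

```shell
# Dump the static collection from the persistent instance...
mongodump --db geodb --collection locations --out /tmp/seed
# ...and restore it into the in-memory instance.
mongorestore --port 27018 --db geodb /tmp/seed/geodb
```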

When a collection is queried frequently, it stays resident in memory until mongod needs the RAM to load other collections (on a busy system).

Any comments welcome!

You cannot keep a specific collection in memory while the others stay on disk, but bear in mind that MongoDB does some caching internally when you execute the same query again and again.

What you can do is the following:

  • Option 1: You can create a "view" with $merge or $out and query that collection with a find (all). By the way, sometimes it's better to do a full scan instead of going through indexes (when you have many results, especially if each document is big). So with the "view" you can go without any index, since you do a find all, and you have better chances of your results being cached. This also depends on how big your RAM is, since MongoDB may need to swap data out of RAM.
  • Option 2: If your data are read-only you can load a MongoDB instance completely in-memory (https://docs.mongodb.com/manual/core/inmemory/) holding only this collection. But you have to find a way to re-create and re-populate the collection whenever mongod restarts.
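Since the collection never changes, another alternative (addressing the question's last point) is to skip the database at query time entirely: load all polygons into application memory once at startup and test point membership in JavaScript. A minimal sketch using a ray-casting test for simple polygons without holes; the seedCache/findContainingPolygon names and the sample document are hypothetical, and a library such as Turf.js would be a more robust choice for real GeoJSON:

```javascript
// Sketch of an application-level cache for a static polygon collection.
// Assumes simple polygons without holes, stored as GeoJSON-style rings
// of [lng, lat] pairs (as in the schema above).

// Ray-casting point-in-polygon test against a single ring.
function pointInRing(point, ring) {
  var x = point[0], y = point[1];
  var inside = false;
  for (var i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    var xi = ring[i][0], yi = ring[i][1];
    var xj = ring[j][0], yj = ring[j][1];
    var crosses = (yi > y) !== (yj > y) &&
      x < ((xj - xi) * (y - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}

// Hypothetical cache: filled once at startup (e.g. from
// Location.find({}).lean()), then queried without touching MongoDB.
var polygonCache = [];

function seedCache(docs) {
  docs.forEach(function (doc) {
    polygonCache.push({ name: doc.name, ring: doc.geo.coordinates[0] });
  });
}

function findContainingPolygon(lng, lat) {
  for (var i = 0; i < polygonCache.length; i++) {
    if (pointInRing([lng, lat], polygonCache[i].ring)) {
      return polygonCache[i];
    }
  }
  return null;
}

// Inline sample document for illustration; real data would come from
// the Location model at startup.
seedCache([{
  name: 'unit square',
  geo: {
    type: 'Polygon',
    coordinates: [[[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]]]
  }
}]);
```

With all ~20 lookups answered from process memory, the per-request database round trips disappear entirely, which directly targets the 600ms the question complains about.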
