Update MongoDB field using value of another field

In MongoDB, is it possible to update the value of a field using the value from another field? The equivalent SQL would be something like:



UPDATE Person SET Name = FirstName + ' ' + LastName


And the MongoDB pseudo-code would be:



db.person.update( {}, { $set : { name : firstName + ' ' + lastName } } );

Tags: mongodb, mongodb-query, aggregation-framework

Asked Oct 20 '10 at 5:22 by Chris Fulstow. Edited May 17 '16 at 15:28 by styvane.

  • Good question. Maybe you need to wait for / vote for jira.mongodb.org/browse/SERVER-458 – Thilo, Oct 20 '10 at 6:04

  • The precise feature request is jira.mongodb.org/browse/SERVER-11345 - still open, not yet triaged. – Vince Bowdren, Feb 16 '16 at 15:15

  • @Chris, could you please revise the accepted answer? It appears my answer is outdated. – Niels van der Rest, Oct 29 '18 at 8:00

6 Answers

103 votes – answered Oct 20 '10 by Niels van der Rest (edited Oct 29 '18)

Apparently there is a way to do this efficiently since MongoDB 3.4, see styvane's answer.





Obsolete answer below



You cannot refer to the document itself in an update (yet). You'll need to iterate through the documents and update each document using a function. See this answer for an example, or this one for server-side eval().
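For reference only, the server-side eval() route mentioned above relied on the old db.eval() shell helper, which is deprecated (since 3.0) and removed from later servers, so treat this purely as a sketch of what that pattern looked like on an old deployment:

// Runs the whole loop inside the server's JavaScript engine.
// db.eval() takes a global lock and no longer exists on modern servers.
db.eval(function () {
    db.person.find().forEach(function (doc) {
        db.person.update(
            { _id: doc._id },
            { $set: { name: doc.firstName + ' ' + doc.lastName } }
        );
    });
});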

  • Is this still valid today? – Christian Engel, Jan 12 '13 at 22:08

  • @ChristianEngel: It appears so. I wasn't able to find anything in the MongoDB docs that mentions a reference to the current document in an update operation. This related feature request is still unresolved as well. – Niels van der Rest, Jan 14 '13 at 12:28

  • Is it still valid in April 2017? Or are there already new features which can do this? – Kim, Apr 26 '17 at 12:01

  • @Kim It looks like it is still valid. Also, the feature request that @niels-van-der-rest pointed out back in 2013 is still OPEN. – Danziger, May 3 '17 at 22:30

  • This is not a valid answer anymore; have a look at @styvane's answer. – Haroon Khan, Mar 11 '18 at 18:28

224 votes – answered Jan 20 '13 by Carlos Barcelona (edited Oct 28 '15)

You should iterate through. For your specific case:



db.person.find().snapshot().forEach(
  function (elem) {
    db.person.update(
      { _id: elem._id },
      {
        $set: {
          name: elem.firstname + ' ' + elem.lastname
        }
      }
    );
  }
);

  • What happens if another user has changed the document between your find() and your save()? – UpTheCreek, Feb 15 '13 at 11:33

  • True, but copying between fields should not require transactions to be atomic. – UpTheCreek, Feb 19 '13 at 9:25

  • It's important to notice that save() fully replaces the document. Should use update() instead. – EdMelo, Mar 22 '13 at 21:44

  • How about db.person.update( { _id: elem._id }, { $set: { name: elem.firstname + ' ' + elem.lastname } } ); – Philipp Jardas, Aug 19 '13 at 13:34

  • +1. Wrong format for update, doesn't work as currently formulated. – Viktor Hedefalk, Sep 5 '13 at 9:46

122 votes – answered May 17 '16 by styvane (edited Jun 24 '17)

The best way to do this is to use the aggregation framework to compute our new field.



MongoDB 3.4



In MongoDB 3.4, the most efficient solution uses the $addFields and the $out aggregation pipeline operators.



db.collection.aggregate(
  [
    { "$addFields": {
      "name": { "$concat": [ "$firstName", " ", "$lastName" ] }
    }},
    { "$out": "collection" }
  ]
)


Note that this does not update your collection in place but instead replaces the existing collection or creates a new one. Also, for update operations that require "type casting" you will need client-side processing, and depending on the operation, you may need to use the find() method instead of the .aggregate() method.
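For example, a client-side "type casting" pass of that sort might look roughly like the following, where dob is a purely hypothetical field that was stored as a string and needs to become a real Date while the name field is being set:

// Sketch only: "dob" stands in for any field whose type needs converting.
db.collection.find({ "dob": { "$type": 2 } }).forEach(function (doc) {   // 2 = BSON string
    db.collection.update(
        { "_id": doc._id },
        { "$set": {
            "name": doc.firstName + " " + doc.lastName,
            "dob": new Date(doc.dob)   // the cast has to happen client side
        }}
    );
});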



MongoDB 3.2 and 3.0



The way we do this is by $projecting our documents and using the $concat string aggregation operator to return the concatenated string.
From there, you iterate the cursor and use the $set update operator to add the new field to your documents, using bulk operations for maximum efficiency.



Aggregation query:



var cursor = db.collection.aggregate([
  { "$project": {
    "name": { "$concat": [ "$firstName", " ", "$lastName" ] }
  }}
])


MongoDB 3.2 or newer



From this version onward, you need to use the bulkWrite method.



var requests = [];
cursor.forEach(document => {
  requests.push({
    'updateOne': {
      'filter': { '_id': document._id },
      'update': { '$set': { 'name': document.name } }
    }
  });
  if (requests.length === 500) {
    // Execute per 500 operations and re-init
    db.collection.bulkWrite(requests);
    requests = [];
  }
});

if (requests.length > 0) {
  db.collection.bulkWrite(requests);
}


MongoDB 2.6 and 3.0



From this version you need to use the now deprecated Bulk API and its associated methods.



var bulk = db.collection.initializeUnorderedBulkOp();
var count = 0;

cursor.forEach(function(document) {
  bulk.find({ '_id': document._id }).updateOne({
    '$set': { 'name': document.name }
  });
  count++;
  if (count % 500 === 0) {
    // Execute per 500 operations and re-init
    bulk.execute();
    bulk = db.collection.initializeUnorderedBulkOp();
  }
});

// clean up the remaining queued operations
if (count % 500 !== 0) {
  bulk.execute();
}


MongoDB 2.4



cursor["result"].forEach(function(document) {
db.collection.update(
{ "_id": document._id },
{ "$set": { "name": document.name } }
);
})
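Worth noting, although it postdates the versions covered above: from MongoDB 4.2 the update commands themselves accept an aggregation pipeline, so the whole operation becomes a single in-place update with no $out stage and no client-side loop:

// MongoDB 4.2+ only: pipeline-style update, modifies the documents in place.
db.collection.updateMany(
  {},
  [
    { "$set": { "name": { "$concat": [ "$firstName", " ", "$lastName" ] } } }
  ]
)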

  • Great answer. Just wondering, is calling .length on every iteration in mongo as slow as in regular JavaScript, where it recalculates the length on every call? – notbad.jpeg, Sep 15 '16 at 16:58

  • @notbad.jpeg I can't say whether it is slow or not, but the length property is checked at each iteration. This is something I will need to check later. Another option, if that is slow, is to use a counter which you then increment by 1 at each iteration. – styvane, Sep 15 '16 at 19:40

  • The answer does well to summarize approaches, and I know it's formed addressing the specific update request mentioned in the question; however, one small niggle is too many people are jumping to the aggregation method. This really needs a BOLD disclaimer that this in fact 1. creates a new collection rather than updating the existing one, and 2. needs to be avoided when "casting types", i.e. the common mistake of storing "strings" instead of Date and needing to convert. I for one would be very happy if this was prominent, and not just a little comment tacked on the end. – Neil Lunn, Jun 17 '17 at 9:39

  • It seems that the aggregate() approach is not fully equivalent to what update() does, because the records that do not get touched by the filter/$match would be missing in the target collection and disappear from the original if it is specified as target. – Sergey Shcherbakov, Mar 20 '18 at 10:15

  • So, we still cannot refer to a document by itself in MongoDB? I don't want to create a new collection just to add one column! – Homam, Aug 29 '18 at 5:27

40 votes – answered Feb 11 '15 by Eric Kigathi

For a database with high activity, you may run into issues where your updates affect actively changing records, and for this reason I recommend using snapshot().



db.person.find().snapshot().forEach( function (hombre) {
  hombre.name = hombre.firstName + ' ' + hombre.lastName;
  db.person.save(hombre);
});


http://docs.mongodb.org/manual/reference/method/cursor.snapshot/

  • What happens if another user edited the person between the find() and save()? I have a case where multiple calls can be done to the same object changing them based on their current values. The 2nd user should have to wait with reading until the 1st is done with saving. Does this accomplish that? – Marco, Oct 11 '17 at 12:48

  • About the snapshot(): Deprecated in the mongo shell since v3.2. Starting in v3.2, the $snapshot operator is deprecated in the mongo shell. In the mongo shell, use cursor.snapshot() instead. (link) – ppython, Dec 20 '17 at 14:52

9 votes – answered Apr 3 '15 by Chris Gibb

I tried the above solution but I found it unsuitable for large amounts of data. I then discovered the stream feature:



MongoClient.connect("...", function(err, db){
  var c = db.collection('yourCollection');
  var s = c.find({/* your query */}).stream();
  s.on('data', function(doc){
    c.update({_id: doc._id}, {$set: {name: doc.firstName + ' ' + doc.lastName}}, function(err, result) { /* result == true? */ });
  });
  s.on('end', function(){
    // stream can end before all your updates do if you have a lot
  });
});
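If the early 'end' event is a problem, one way around it is to count the in-flight updates and only finish once they have all called back. A rough sketch along the same lines, using the same legacy driver API as above, so treat it as illustrative only:

var MongoClient = require('mongodb').MongoClient;

MongoClient.connect("...", function(err, db){
  var c = db.collection('yourCollection');
  var s = c.find({/* your query */}).stream();
  var pending = 0, ended = false;

  function maybeDone() {
    // finish only when the stream has ended AND every update has called back
    if (ended && pending === 0) {
      db.close();
    }
  }

  s.on('data', function(doc){
    pending++;
    c.update({_id: doc._id}, {$set: {name: doc.firstName + ' ' + doc.lastName}}, function(err, result){
      pending--;
      maybeDone();
    });
  });
  s.on('end', function(){
    ended = true;
    maybeDone();
  });
});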

  • How is this different? Will the stream be throttled by the update activity? Do you have any reference to it? The Mongo docs are quite poor. – Nico, Nov 21 '16 at 14:58

2 votes

Here's what we came up with for copying one field to another for ~150_000 records. It took about 6 minutes, but is still significantly less resource intensive than it would have been to instantiate and iterate over the same number of ruby objects.



js_query = %({
  $or : [
    {
      'settings.mobile_notifications' : { $exists : false },
      'settings.mobile_admin_notifications' : { $exists : false }
    }
  ]
})

js_for_each = %(function(user) {
  if (!user.settings.hasOwnProperty('mobile_notifications')) {
    user.settings.mobile_notifications = user.settings.email_notifications;
  }
  if (!user.settings.hasOwnProperty('mobile_admin_notifications')) {
    user.settings.mobile_admin_notifications = user.settings.email_admin_notifications;
  }
  db.users.save(user);
})

js = "db.users.find(#{js_query}).forEach(#{js_for_each});"
Mongoid::Sessions.default.command('$eval' => js)





share|improve this answer





















    Your Answer






    StackExchange.ifUsing("editor", function () {
    StackExchange.using("externalEditor", function () {
    StackExchange.using("snippets", function () {
    StackExchange.snippets.init();
    });
    });
    }, "code-snippets");

    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "1"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: true,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: 10,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f3974985%2fupdate-mongodb-field-using-value-of-another-field%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    6 Answers
    6






    active

    oldest

    votes








    6 Answers
    6






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    103














    Apparently there is a way to do this efficiently since MongoDB 3.4, see styvane's answer.





    Obsolete answer below



    You cannot refer to the document itself in an update (yet). You'll need to iterate through the documents and update each document using a function. See this answer for an example, or this one for server-side eval().






    share|improve this answer



















    • 30




      Is this still valid today?
      – Christian Engel
      Jan 12 '13 at 22:08






    • 3




      @ChristianEngel: It appears so. I wasn't able to find anything in the MongoDB docs that mentions a reference to the current document in an update operation. This related feature request is still unresolved as well.
      – Niels van der Rest
      Jan 14 '13 at 12:28






    • 4




      Is it still valid in April 2017? Or there are already new features which can do this?
      – Kim
      Apr 26 '17 at 12:01






    • 1




      @Kim It looks like it is still valid. Also, the feature request that @niels-van-der-rest pointed out back in 2013 is still in OPEN.
      – Danziger
      May 3 '17 at 22:30






    • 8




      this is not a valid answer anymore, have a look at @styvane answer
      – Haroon Khan
      Mar 11 '18 at 18:28
















    103














    Apparently there is a way to do this efficiently since MongoDB 3.4, see styvane's answer.





    Obsolete answer below



    You cannot refer to the document itself in an update (yet). You'll need to iterate through the documents and update each document using a function. See this answer for an example, or this one for server-side eval().






    share|improve this answer



















    • 30




      Is this still valid today?
      – Christian Engel
      Jan 12 '13 at 22:08






    • 3




      @ChristianEngel: It appears so. I wasn't able to find anything in the MongoDB docs that mentions a reference to the current document in an update operation. This related feature request is still unresolved as well.
      – Niels van der Rest
      Jan 14 '13 at 12:28






    • 4




      Is it still valid in April 2017? Or there are already new features which can do this?
      – Kim
      Apr 26 '17 at 12:01






    • 1




      @Kim It looks like it is still valid. Also, the feature request that @niels-van-der-rest pointed out back in 2013 is still in OPEN.
      – Danziger
      May 3 '17 at 22:30






    • 8




      this is not a valid answer anymore, have a look at @styvane answer
      – Haroon Khan
      Mar 11 '18 at 18:28














    103












    103








    103






    Apparently there is a way to do this efficiently since MongoDB 3.4, see styvane's answer.





    Obsolete answer below



    You cannot refer to the document itself in an update (yet). You'll need to iterate through the documents and update each document using a function. See this answer for an example, or this one for server-side eval().






    share|improve this answer














    Apparently there is a way to do this efficiently since MongoDB 3.4, see styvane's answer.





    Obsolete answer below



    You cannot refer to the document itself in an update (yet). You'll need to iterate through the documents and update each document using a function. See this answer for an example, or this one for server-side eval().







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited Oct 29 '18 at 7:56

























    answered Oct 20 '10 at 9:03









    Niels van der RestNiels van der Rest

    22.9k137182




    22.9k137182








    • 30




      Is this still valid today?
      – Christian Engel
      Jan 12 '13 at 22:08






    • 3




      @ChristianEngel: It appears so. I wasn't able to find anything in the MongoDB docs that mentions a reference to the current document in an update operation. This related feature request is still unresolved as well.
      – Niels van der Rest
      Jan 14 '13 at 12:28






    • 4




      Is it still valid in April 2017? Or there are already new features which can do this?
      – Kim
      Apr 26 '17 at 12:01






    • 1




      @Kim It looks like it is still valid. Also, the feature request that @niels-van-der-rest pointed out back in 2013 is still in OPEN.
      – Danziger
      May 3 '17 at 22:30






    • 8




      this is not a valid answer anymore, have a look at @styvane answer
      – Haroon Khan
      Mar 11 '18 at 18:28














    • 30




      Is this still valid today?
      – Christian Engel
      Jan 12 '13 at 22:08






    • 3




      @ChristianEngel: It appears so. I wasn't able to find anything in the MongoDB docs that mentions a reference to the current document in an update operation. This related feature request is still unresolved as well.
      – Niels van der Rest
      Jan 14 '13 at 12:28






    • 4




      Is it still valid in April 2017? Or there are already new features which can do this?
      – Kim
      Apr 26 '17 at 12:01






    • 1




      @Kim It looks like it is still valid. Also, the feature request that @niels-van-der-rest pointed out back in 2013 is still in OPEN.
      – Danziger
      May 3 '17 at 22:30






    • 8




      this is not a valid answer anymore, have a look at @styvane answer
      – Haroon Khan
      Mar 11 '18 at 18:28








    30




    30




    Is this still valid today?
    – Christian Engel
    Jan 12 '13 at 22:08




    Is this still valid today?
    – Christian Engel
    Jan 12 '13 at 22:08




    3




    3




    @ChristianEngel: It appears so. I wasn't able to find anything in the MongoDB docs that mentions a reference to the current document in an update operation. This related feature request is still unresolved as well.
    – Niels van der Rest
    Jan 14 '13 at 12:28




    @ChristianEngel: It appears so. I wasn't able to find anything in the MongoDB docs that mentions a reference to the current document in an update operation. This related feature request is still unresolved as well.
    – Niels van der Rest
    Jan 14 '13 at 12:28




    4




    4




    Is it still valid in April 2017? Or there are already new features which can do this?
    – Kim
    Apr 26 '17 at 12:01




    Is it still valid in April 2017? Or there are already new features which can do this?
    – Kim
    Apr 26 '17 at 12:01




    1




    1




    @Kim It looks like it is still valid. Also, the feature request that @niels-van-der-rest pointed out back in 2013 is still in OPEN.
    – Danziger
    May 3 '17 at 22:30




    @Kim It looks like it is still valid. Also, the feature request that @niels-van-der-rest pointed out back in 2013 is still in OPEN.
    – Danziger
    May 3 '17 at 22:30




    8




    8




    this is not a valid answer anymore, have a look at @styvane answer
    – Haroon Khan
    Mar 11 '18 at 18:28




    this is not a valid answer anymore, have a look at @styvane answer
    – Haroon Khan
    Mar 11 '18 at 18:28













    224














    You should iterate through. For your specific case:



    db.person.find().snapshot().forEach(
    function (elem) {
    db.person.update(
    {
    _id: elem._id
    },
    {
    $set: {
    name: elem.firstname + ' ' + elem.lastname
    }
    }
    );
    }
    );





    share|improve this answer



















    • 4




      What happens if another user has changed the document between your find() and your save()?
      – UpTheCreek
      Feb 15 '13 at 11:33






    • 3




      True, but copying between fields should not require transactions to be atomic.
      – UpTheCreek
      Feb 19 '13 at 9:25






    • 3




      It's important to notice that save() fully replaces the document. Should use update() instead.
      – EdMelo
      Mar 22 '13 at 21:44






    • 11




      How about db.person.update( { _id: elem._id }, { $set: { name: elem.firstname + ' ' + elem.lastname } } );
      – Philipp Jardas
      Aug 19 '13 at 13:34






    • 1




      +1. Wrong format for update, doesn't work as currently formulated.
      – Viktor Hedefalk
      Sep 5 '13 at 9:46
















    224














    You should iterate through. For your specific case:



    db.person.find().snapshot().forEach(
    function (elem) {
    db.person.update(
    {
    _id: elem._id
    },
    {
    $set: {
    name: elem.firstname + ' ' + elem.lastname
    }
    }
    );
    }
    );





    share|improve this answer



















    • 4




      What happens if another user has changed the document between your find() and your save()?
      – UpTheCreek
      Feb 15 '13 at 11:33






    • 3




      True, but copying between fields should not require transactions to be atomic.
      – UpTheCreek
      Feb 19 '13 at 9:25






    • 3




      It's important to notice that save() fully replaces the document. Should use update() instead.
      – EdMelo
      Mar 22 '13 at 21:44






    • 11




      How about db.person.update( { _id: elem._id }, { $set: { name: elem.firstname + ' ' + elem.lastname } } );
      – Philipp Jardas
      Aug 19 '13 at 13:34






    • 1




      +1. Wrong format for update, doesn't work as currently formulated.
      – Viktor Hedefalk
      Sep 5 '13 at 9:46














    224












    224








    224






    You should iterate through. For your specific case:



    db.person.find().snapshot().forEach(
    function (elem) {
    db.person.update(
    {
    _id: elem._id
    },
    {
    $set: {
    name: elem.firstname + ' ' + elem.lastname
    }
    }
    );
    }
    );





    share|improve this answer














    You should iterate through. For your specific case:



    db.person.find().snapshot().forEach(
    function (elem) {
    db.person.update(
    {
    _id: elem._id
    },
    {
    $set: {
    name: elem.firstname + ' ' + elem.lastname
    }
    }
    );
    }
    );






    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited Oct 28 '15 at 5:24









    evandrix

    4,75622329




    4,75622329










    answered Jan 20 '13 at 9:17









    Carlos BarcelonaCarlos Barcelona

    4,40732538




    4,40732538








    • 4




      What happens if another user has changed the document between your find() and your save()?
      – UpTheCreek
      Feb 15 '13 at 11:33






    • 3




      True, but copying between fields should not require transactions to be atomic.
      – UpTheCreek
      Feb 19 '13 at 9:25






    • 3




      It's important to notice that save() fully replaces the document. Should use update() instead.
      – EdMelo
      Mar 22 '13 at 21:44






    • 11




      How about db.person.update( { _id: elem._id }, { $set: { name: elem.firstname + ' ' + elem.lastname } } );
      – Philipp Jardas
      Aug 19 '13 at 13:34






    • 1




      +1. Wrong format for update, doesn't work as currently formulated.
      – Viktor Hedefalk
      Sep 5 '13 at 9:46














    • 4




      What happens if another user has changed the document between your find() and your save()?
      – UpTheCreek
      Feb 15 '13 at 11:33






    • 3




      True, but copying between fields should not require transactions to be atomic.
      – UpTheCreek
      Feb 19 '13 at 9:25






    • 3




      It's important to notice that save() fully replaces the document. Should use update() instead.
      – EdMelo
      Mar 22 '13 at 21:44






    • 11




      How about db.person.update( { _id: elem._id }, { $set: { name: elem.firstname + ' ' + elem.lastname } } );
      – Philipp Jardas
      Aug 19 '13 at 13:34






    • 1




      +1. Wrong format for update, doesn't work as currently formulated.
      – Viktor Hedefalk
      Sep 5 '13 at 9:46








    4




    4




    What happens if another user has changed the document between your find() and your save()?
    – UpTheCreek
    Feb 15 '13 at 11:33




    What happens if another user has changed the document between your find() and your save()?
    – UpTheCreek
    Feb 15 '13 at 11:33




    3




    3




    True, but copying between fields should not require transactions to be atomic.
    – UpTheCreek
    Feb 19 '13 at 9:25




    True, but copying between fields should not require transactions to be atomic.
    – UpTheCreek
    Feb 19 '13 at 9:25




    3




    3




    It's important to notice that save() fully replaces the document. Should use update() instead.
    – EdMelo
    Mar 22 '13 at 21:44




    It's important to notice that save() fully replaces the document. Should use update() instead.
    – EdMelo
    Mar 22 '13 at 21:44




    11




    11




    How about db.person.update( { _id: elem._id }, { $set: { name: elem.firstname + ' ' + elem.lastname } } );
    – Philipp Jardas
    Aug 19 '13 at 13:34




    How about db.person.update( { _id: elem._id }, { $set: { name: elem.firstname + ' ' + elem.lastname } } );
    – Philipp Jardas
    Aug 19 '13 at 13:34




    1




    1




    +1. Wrong format for update, doesn't work as currently formulated.
    – Viktor Hedefalk
    Sep 5 '13 at 9:46




    +1. Wrong format for update, doesn't work as currently formulated.
    – Viktor Hedefalk
    Sep 5 '13 at 9:46











    122














    The best way to do this is to use the aggregation framework to compute our new field.



    MongoDB 3.4



    The most efficient solution is in MongoDB 3.4 using the $addFields and the $out aggregation pipeline operators.



    db.collection.aggregate(
    [
    { "$addFields": {
    "name": { "$concat": [ "$firstName", " ", "$lastName" ] }
    }},
    { "$out": "collection" }
    ]
    )


    Note that this does not update your collection but instead replace the existing collection or create a new one. Also for update operations that require "type casting" you will need client side processing, and depending on the operation, you may need to use the find() method instead of the .aggreate() method.



    MongoDB 3.2 and 3.0



    The way we do this is by $projecting our documents and use the $concat string aggregation operator to return the concatenated string.
    we From there, you then iterate the cursor and use the $set update operator to add the new field to your documents using bulk operations for maximum efficiency.



    Aggregation query:



    var cursor = db.collection.aggregate([ 
    { "$project": {
    "name": { "$concat": [ "$firstName", " ", "$lastName" ] }
    }}
    ])


    MongoDB 3.2 or newer



    from this, you need to use the bulkWrite method.



    var requests = ;
    cursor.forEach(document => {
    requests.push( {
    'updateOne': {
    'filter': { '_id': document._id },
    'update': { '$set': { 'name': document.name } }
    }
    });
    if (requests.length === 500) {
    //Execute per 500 operations and re-init
    db.collection.bulkWrite(requests);
    requests = ;
    }
    });

    if(requests.length > 0) {
    db.collection.bulkWrite(requests);
    }


    MongoDB 2.6 and 3.0



    From this version you need to use the now deprecated Bulk API and its associated methods.



    var bulk = db.collection.initializeUnorderedBulkOp();
    var count = 0;

    cursor.snapshot().forEach(function(document) {
    bulk.find({ '_id': document._id }).updateOne( {
    '$set': { 'name': document.name }
    });
    count++;
    if(count%500 === 0) {
    // Excecute per 500 operations and re-init
    bulk.execute();
    bulk = db.collection.initializeUnorderedBulkOp();
    }
    })

    // clean up queues
    if(count > 0) {
    bulk.execute();
    }


    MongoDB 2.4



    cursor["result"].forEach(function(document) {
    db.collection.update(
    { "_id": document._id },
    { "$set": { "name": document.name } }
    );
    })





    share|improve this answer























    • Great answer. Just wondering, is calling .length every iteration in mongo is as slow as regular javascript, where it recalculates the length on every call?
      – notbad.jpeg
      Sep 15 '16 at 16:58






    • 2




      @notbad.jpeg I can say whether it is slow or not but the length property is check at each iteration. This is something I will need to check later. Another option if that is slow is to use a counter which you then increment by 1 at each iteration.
      – styvane
      Sep 15 '16 at 19:40






    • 7




      The answer does well to summarize approaches and I know it's formed addressing the specific update request mentioned in the question, however one small niggle is too many people are jumping to the aggregation method. This really needs a BOLD disclaimer that this in fact 1. Creates a new collection rather than updating the existing one. 2. Needs to be avoided when "casting types". I.E The common mistake of storing "strings" instead of Date and needing to convert. I for one would be very happy if this was prominent, and not just a little comment tacked on the end.
      – Neil Lunn
      Jun 17 '17 at 9:39












    • It seems that the aggregate() approach is not fully equivalent to what update() does, because the records that do not get touched by the filter/$match would be missing in the target collection and disappear from the original if it is specified as target.
      – Sergey Shcherbakov
      Mar 20 '18 at 10:15






    • 2




      So, still we cannot refer to a document by itself in MongoDB? I don't want to create a new collection just to add one column!
      – Homam
      Aug 29 '18 at 5:27
















    122














    The best way to do this is to use the aggregation framework to compute our new field.



    MongoDB 3.4



    The most efficient solution is in MongoDB 3.4 using the $addFields and the $out aggregation pipeline operators.



    db.collection.aggregate(
    [
    { "$addFields": {
    "name": { "$concat": [ "$firstName", " ", "$lastName" ] }
    }},
    { "$out": "collection" }
    ]
    )


    Note that this does not update your collection but instead replace the existing collection or create a new one. Also for update operations that require "type casting" you will need client side processing, and depending on the operation, you may need to use the find() method instead of the .aggreate() method.



    MongoDB 3.2 and 3.0



    The way we do this is by $projecting our documents and use the $concat string aggregation operator to return the concatenated string.
    we From there, you then iterate the cursor and use the $set update operator to add the new field to your documents using bulk operations for maximum efficiency.



    Aggregation query:



    var cursor = db.collection.aggregate([ 
    { "$project": {
    "name": { "$concat": [ "$firstName", " ", "$lastName" ] }
    }}
    ])


    MongoDB 3.2 or newer



    from this, you need to use the bulkWrite method.



    var requests = ;
    cursor.forEach(document => {
    requests.push( {
    'updateOne': {
    'filter': { '_id': document._id },
    'update': { '$set': { 'name': document.name } }
    }
    });
    if (requests.length === 500) {
    //Execute per 500 operations and re-init
    db.collection.bulkWrite(requests);
    requests = ;
    }
    });

    if(requests.length > 0) {
    db.collection.bulkWrite(requests);
    }


    MongoDB 2.6 and 3.0



    From this version you need to use the now deprecated Bulk API and its associated methods.



    var bulk = db.collection.initializeUnorderedBulkOp();
    var count = 0;

    cursor.snapshot().forEach(function(document) {
    bulk.find({ '_id': document._id }).updateOne( {
    '$set': { 'name': document.name }
    });
    count++;
    if(count%500 === 0) {
    // Excecute per 500 operations and re-init
    bulk.execute();
    bulk = db.collection.initializeUnorderedBulkOp();
    }
    })

    // clean up queues
    if(count > 0) {
    bulk.execute();
    }


    MongoDB 2.4



    cursor["result"].forEach(function(document) {
    db.collection.update(
    { "_id": document._id },
    { "$set": { "name": document.name } }
    );
    })





    share|improve this answer























    • Great answer. Just wondering, is calling .length every iteration in mongo is as slow as regular javascript, where it recalculates the length on every call?
      – notbad.jpeg
      Sep 15 '16 at 16:58






    • 2




      @notbad.jpeg I can say whether it is slow or not but the length property is check at each iteration. This is something I will need to check later. Another option if that is slow is to use a counter which you then increment by 1 at each iteration.
      – styvane
      Sep 15 '16 at 19:40






    • 7




      The answer does well to summarize approaches and I know it's formed addressing the specific update request mentioned in the question, however one small niggle is too many people are jumping to the aggregation method. This really needs a BOLD disclaimer that this in fact 1. Creates a new collection rather than updating the existing one. 2. Needs to be avoided when "casting types". I.E The common mistake of storing "strings" instead of Date and needing to convert. I for one would be very happy if this was prominent, and not just a little comment tacked on the end.
      – Neil Lunn
      Jun 17 '17 at 9:39












    • It seems that the aggregate() approach is not fully equivalent to what update() does, because the records that do not get touched by the filter/$match would be missing in the target collection and disappear from the original if it is specified as target.
      – Sergey Shcherbakov
      Mar 20 '18 at 10:15






    • 2




      So, still we cannot refer to a document by itself in MongoDB? I don't want to create a new collection just to add one column!
      – Homam
      Aug 29 '18 at 5:27














    122












    122








    122






    The best way to do this is to use the aggregation framework to compute our new field.



    MongoDB 3.4



    The most efficient solution is in MongoDB 3.4 using the $addFields and the $out aggregation pipeline operators.



    db.collection.aggregate(
    [
    { "$addFields": {
    "name": { "$concat": [ "$firstName", " ", "$lastName" ] }
    }},
    { "$out": "collection" }
    ]
    )


    Note that this does not update your collection but instead replace the existing collection or create a new one. Also for update operations that require "type casting" you will need client side processing, and depending on the operation, you may need to use the find() method instead of the .aggreate() method.



    MongoDB 3.2 and 3.0



    The way we do this is by $projecting our documents and use the $concat string aggregation operator to return the concatenated string.
    we From there, you then iterate the cursor and use the $set update operator to add the new field to your documents using bulk operations for maximum efficiency.



    Aggregation query:



    var cursor = db.collection.aggregate([ 
    { "$project": {
    "name": { "$concat": [ "$firstName", " ", "$lastName" ] }
    }}
    ])


    MongoDB 3.2 or newer



    from this, you need to use the bulkWrite method.



    var requests = ;
    cursor.forEach(document => {
    requests.push( {
    'updateOne': {
    'filter': { '_id': document._id },
    'update': { '$set': { 'name': document.name } }
    }
    });
    if (requests.length === 500) {
    //Execute per 500 operations and re-init
    db.collection.bulkWrite(requests);
    requests = ;
    }
    });

    if(requests.length > 0) {
    db.collection.bulkWrite(requests);
    }


    MongoDB 2.6 and 3.0



    From this version you need to use the now deprecated Bulk API and its associated methods.



    var bulk = db.collection.initializeUnorderedBulkOp();
    var count = 0;

    cursor.snapshot().forEach(function(document) {
    bulk.find({ '_id': document._id }).updateOne( {
    '$set': { 'name': document.name }
    });
    count++;
    if(count%500 === 0) {
    // Excecute per 500 operations and re-init
    bulk.execute();
    bulk = db.collection.initializeUnorderedBulkOp();
    }
    })

    // clean up queues
    if(count > 0) {
    bulk.execute();
    }


    MongoDB 2.4



    cursor["result"].forEach(function(document) {
    db.collection.update(
    { "_id": document._id },
    { "$set": { "name": document.name } }
    );
    })





    share|improve this answer














    The best way to do this is to use the aggregation framework to compute our new field.



    MongoDB 3.4



    The most efficient solution is in MongoDB 3.4 using the $addFields and the $out aggregation pipeline operators.



    db.collection.aggregate(
    [
    { "$addFields": {
    "name": { "$concat": [ "$firstName", " ", "$lastName" ] }
    }},
    { "$out": "collection" }
    ]
    )


    Note that this does not update your collection but instead replace the existing collection or create a new one. Also for update operations that require "type casting" you will need client side processing, and depending on the operation, you may need to use the find() method instead of the .aggreate() method.



    MongoDB 3.2 and 3.0



    The way we do this is by $projecting our documents and use the $concat string aggregation operator to return the concatenated string.
    we From there, you then iterate the cursor and use the $set update operator to add the new field to your documents using bulk operations for maximum efficiency.



    Aggregation query:



    var cursor = db.collection.aggregate([ 
    { "$project": {
    "name": { "$concat": [ "$firstName", " ", "$lastName" ] }
    }}
    ])


    MongoDB 3.2 or newer



    from this, you need to use the bulkWrite method.



    var requests = ;
    cursor.forEach(document => {
    requests.push( {
    'updateOne': {
    'filter': { '_id': document._id },
    'update': { '$set': { 'name': document.name } }
    }
    });
    if (requests.length === 500) {
    //Execute per 500 operations and re-init
    db.collection.bulkWrite(requests);
    requests = ;
    }
    });

    if(requests.length > 0) {
    db.collection.bulkWrite(requests);
    }


    MongoDB 2.6 and 3.0



    From this version you need to use the now deprecated Bulk API and its associated methods.



    var bulk = db.collection.initializeUnorderedBulkOp();
    var count = 0;

    cursor.snapshot().forEach(function(document) {
    bulk.find({ '_id': document._id }).updateOne( {
    '$set': { 'name': document.name }
    });
    count++;
    if(count%500 === 0) {
    // Excecute per 500 operations and re-init
    bulk.execute();
    bulk = db.collection.initializeUnorderedBulkOp();
    }
    })

    // clean up queues
    if(count > 0) {
    bulk.execute();
    }


    MongoDB 2.4



    cursor["result"].forEach(function(document) {
    db.collection.update(
    { "_id": document._id },
    { "$set": { "name": document.name } }
    );
    })






    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited Jun 24 '17 at 18:29

























    answered May 17 '16 at 15:27









    styvanestyvane

    35.1k1277101




    35.1k1277101












    • Great answer. Just wondering, is calling .length every iteration in mongo is as slow as regular javascript, where it recalculates the length on every call?
      – notbad.jpeg
      Sep 15 '16 at 16:58






    • 2




      @notbad.jpeg I can say whether it is slow or not but the length property is check at each iteration. This is something I will need to check later. Another option if that is slow is to use a counter which you then increment by 1 at each iteration.
      – styvane
      Sep 15 '16 at 19:40






    • 7




      The answer does well to summarize approaches and I know it's formed addressing the specific update request mentioned in the question, however one small niggle is too many people are jumping to the aggregation method. This really needs a BOLD disclaimer that this in fact 1. Creates a new collection rather than updating the existing one. 2. Needs to be avoided when "casting types". I.E The common mistake of storing "strings" instead of Date and needing to convert. I for one would be very happy if this was prominent, and not just a little comment tacked on the end.
      – Neil Lunn
      Jun 17 '17 at 9:39












    • It seems that the aggregate() approach is not fully equivalent to what update() does, because the records that do not get touched by the filter/$match would be missing in the target collection and disappear from the original if it is specified as target.
      – Sergey Shcherbakov
      Mar 20 '18 at 10:15






    • 2




      So, still we cannot refer to a document by itself in MongoDB? I don't want to create a new collection just to add one column!
      – Homam
      Aug 29 '18 at 5:27


















    • Great answer. Just wondering, is calling .length every iteration in mongo is as slow as regular javascript, where it recalculates the length on every call?
      – notbad.jpeg
      Sep 15 '16 at 16:58






    • 2




      @notbad.jpeg I can say whether it is slow or not but the length property is check at each iteration. This is something I will need to check later. Another option if that is slow is to use a counter which you then increment by 1 at each iteration.
      – styvane
      Sep 15 '16 at 19:40






    • 7




      The answer does well to summarize approaches and I know it's formed addressing the specific update request mentioned in the question, however one small niggle is too many people are jumping to the aggregation method. This really needs a BOLD disclaimer that this in fact 1. Creates a new collection rather than updating the existing one. 2. Needs to be avoided when "casting types". I.E The common mistake of storing "strings" instead of Date and needing to convert. I for one would be very happy if this was prominent, and not just a little comment tacked on the end.
      – Neil Lunn
      Jun 17 '17 at 9:39












    • It seems that the aggregate() approach is not fully equivalent to what update() does, because the records that do not get touched by the filter/$match would be missing in the target collection and disappear from the original if it is specified as target.
      – Sergey Shcherbakov
      Mar 20 '18 at 10:15






    • 2




      So, still we cannot refer to a document by itself in MongoDB? I don't want to create a new collection just to add one column!
      – Homam
      Aug 29 '18 at 5:27
















    Great answer. Just wondering, is calling .length every iteration in mongo is as slow as regular javascript, where it recalculates the length on every call?
    – notbad.jpeg
    Sep 15 '16 at 16:58




    Great answer. Just wondering, is calling .length every iteration in mongo is as slow as regular javascript, where it recalculates the length on every call?
    – notbad.jpeg
    Sep 15 '16 at 16:58




    2




    2




    @notbad.jpeg I can say whether it is slow or not but the length property is check at each iteration. This is something I will need to check later. Another option if that is slow is to use a counter which you then increment by 1 at each iteration.
    – styvane
    Sep 15 '16 at 19:40




    @notbad.jpeg I can say whether it is slow or not but the length property is check at each iteration. This is something I will need to check later. Another option if that is slow is to use a counter which you then increment by 1 at each iteration.
    – styvane
    Sep 15 '16 at 19:40




    7




    7




    The answer does well to summarize approaches and I know it's formed addressing the specific update request mentioned in the question, however one small niggle is too many people are jumping to the aggregation method. This really needs a BOLD disclaimer that this in fact 1. Creates a new collection rather than updating the existing one. 2. Needs to be avoided when "casting types". I.E The common mistake of storing "strings" instead of Date and needing to convert. I for one would be very happy if this was prominent, and not just a little comment tacked on the end.
    – Neil Lunn
    Jun 17 '17 at 9:39






    The answer does well to summarize approaches and I know it's formed addressing the specific update request mentioned in the question, however one small niggle is too many people are jumping to the aggregation method. This really needs a BOLD disclaimer that this in fact 1. Creates a new collection rather than updating the existing one. 2. Needs to be avoided when "casting types". I.E The common mistake of storing "strings" instead of Date and needing to convert. I for one would be very happy if this was prominent, and not just a little comment tacked on the end.
    – Neil Lunn
    Jun 17 '17 at 9:39














    It seems that the aggregate() approach is not fully equivalent to what update() does, because the records that do not get touched by the filter/$match would be missing in the target collection and disappear from the original if it is specified as target.
    – Sergey Shcherbakov
    Mar 20 '18 at 10:15




    It seems that the aggregate() approach is not fully equivalent to what update() does, because the records that do not get touched by the filter/$match would be missing in the target collection and disappear from the original if it is specified as target.
    – Sergey Shcherbakov
    Mar 20 '18 at 10:15




    2




    2




    So, still we cannot refer to a document by itself in MongoDB? I don't want to create a new collection just to add one column!
    – Homam
    Aug 29 '18 at 5:27




    So, still we cannot refer to a document by itself in MongoDB? I don't want to create a new collection just to add one column!
    – Homam
    Aug 29 '18 at 5:27











    40














    For a database with high activity, you may run into issues where your updates affect actively changing records and for this reason I recommend using snapshot()



    db.person.find().snapshot().forEach( function (hombre) {
    hombre.name = hombre.firstName + ' ' + hombre.lastName;
    db.person.save(hombre);
    });


    http://docs.mongodb.org/manual/reference/method/cursor.snapshot/






    share|improve this answer

















    • 1




      What happens if another user edited the person between the find() and save()? I have a case where multiple calls can be done to the same object changing them based on their current values. The 2nd user should have to wait with reading until the 1st is done with saving. Does this accomplish that?
      – Marco
      Oct 11 '17 at 12:48






    • 3




      About the snapshot(): Deprecated in the mongo Shell since v3.2. Starting in v3.2, the $snapshot operator is deprecated in the mongo shell. In the mongo shell, use cursor.snapshot() instead. link
      – ppython
      Dec 20 '17 at 14:52


















    40














    For a database with high activity, you may run into issues where your updates affect actively changing records and for this reason I recommend using snapshot()



    db.person.find().snapshot().forEach( function (hombre) {
    hombre.name = hombre.firstName + ' ' + hombre.lastName;
    db.person.save(hombre);
    });


    http://docs.mongodb.org/manual/reference/method/cursor.snapshot/






    share|improve this answer

















    • 1




      What happens if another user edited the person between the find() and save()? I have a case where multiple calls can be done to the same object changing them based on their current values. The 2nd user should have to wait with reading until the 1st is done with saving. Does this accomplish that?
      – Marco
      Oct 11 '17 at 12:48






    • 3




      About the snapshot(): Deprecated in the mongo Shell since v3.2. Starting in v3.2, the $snapshot operator is deprecated in the mongo shell. In the mongo shell, use cursor.snapshot() instead. link
      – ppython
      Dec 20 '17 at 14:52
















    40












    40








    40






    For a database with high activity, you may run into issues where your updates affect actively changing records and for this reason I recommend using snapshot()



    db.person.find().snapshot().forEach( function (hombre) {
    hombre.name = hombre.firstName + ' ' + hombre.lastName;
    db.person.save(hombre);
    });


    http://docs.mongodb.org/manual/reference/method/cursor.snapshot/






    share|improve this answer












    For a database with high activity, you may run into issues where your updates affect actively changing records and for this reason I recommend using snapshot()



    db.person.find().snapshot().forEach( function (hombre) {
    hombre.name = hombre.firstName + ' ' + hombre.lastName;
    db.person.save(hombre);
    });


    http://docs.mongodb.org/manual/reference/method/cursor.snapshot/







    share|improve this answer












    share|improve this answer



    share|improve this answer










    answered Feb 11 '15 at 16:58









    Eric KigathiEric Kigathi

    1,1771419




    1,1771419








    • 1




      What happens if another user edited the person between the find() and save()? I have a case where multiple calls can be done to the same object changing them based on their current values. The 2nd user should have to wait with reading until the 1st is done with saving. Does this accomplish that?
      – Marco
      Oct 11 '17 at 12:48






    • 3




      About the snapshot(): Deprecated in the mongo Shell since v3.2. Starting in v3.2, the $snapshot operator is deprecated in the mongo shell. In the mongo shell, use cursor.snapshot() instead. link
      – ppython
      Dec 20 '17 at 14:52
















    • 1




      What happens if another user edited the person between the find() and save()? I have a case where multiple calls can be done to the same object changing them based on their current values. The 2nd user should have to wait with reading until the 1st is done with saving. Does this accomplish that?
      – Marco
      Oct 11 '17 at 12:48






    • 3




      About the snapshot(): Deprecated in the mongo Shell since v3.2. Starting in v3.2, the $snapshot operator is deprecated in the mongo shell. In the mongo shell, use cursor.snapshot() instead. link
      – ppython
      Dec 20 '17 at 14:52










    1




    1




    What happens if another user edited the person between the find() and save()? I have a case where multiple calls can be done to the same object changing them based on their current values. The 2nd user should have to wait with reading until the 1st is done with saving. Does this accomplish that?
    – Marco
    Oct 11 '17 at 12:48




    What happens if another user edited the person between the find() and save()? I have a case where multiple calls can be done to the same object changing them based on their current values. The 2nd user should have to wait with reading until the 1st is done with saving. Does this accomplish that?
    – Marco
    Oct 11 '17 at 12:48




    3




    3




    About the snapshot(): Deprecated in the mongo Shell since v3.2. Starting in v3.2, the $snapshot operator is deprecated in the mongo shell. In the mongo shell, use cursor.snapshot() instead. link
    – ppython
    Dec 20 '17 at 14:52






    About the snapshot(): Deprecated in the mongo Shell since v3.2. Starting in v3.2, the $snapshot operator is deprecated in the mongo shell. In the mongo shell, use cursor.snapshot() instead. link
    – ppython
    Dec 20 '17 at 14:52













    9














    I tried the above solution but I found it unsuitable for large amounts of data. I then discovered the stream feature:



    MongoClient.connect("...", function(err, db){
    var c = db.collection('yourCollection');
    var s = c.find({/* your query */}).stream();
    s.on('data', function(doc){
    c.update({_id: doc._id}, {$set: {name : doc.firstName + ' ' + doc.lastName}}, function(err, result) { /* result == true? */} }
    });
    s.on('end', function(){
    // stream can end before all your updates do if you have a lot
    })
    })





    answered Apr 3 '15 at 11:44









    Chris Gibb









    • 1




      How is this different? Will the stream be throttled by the update activity? Do you have any reference for it? The Mongo docs are quite poor.
      – Nico
      Nov 21 '16 at 14:58
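
      On the throttling question: the 'data' handler in the answer fires without waiting for each update callback, which is why the 'end' handler's note warns that the stream can finish before the updates do. One way to add backpressure, shown here only as a sketch that would replace the 'data' handler in the answer's snippet, is to pause the stream while each write is in flight.

      // Sketch: pause the cursor stream during each update so documents are only
      // pulled as fast as the writes complete (pause()/resume() are standard
      // Node.js readable-stream methods).
      s.on('data', function(doc){
          s.pause();
          c.update({_id: doc._id}, {$set: {name: doc.firstName + ' ' + doc.lastName}}, function(err, result) {
              s.resume();   // fetch the next document only after this write returns
          });
      });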

























    2














    Here's what we came up with for copying one field to another for ~150,000 records. It took about 6 minutes, but it is still significantly less resource-intensive than instantiating and iterating over the same number of Ruby objects.



    js_query = %({
      $or : [
        {
          'settings.mobile_notifications' : { $exists : false },
          'settings.mobile_admin_notifications' : { $exists : false }
        }
      ]
    })

    js_for_each = %(function(user) {
      if (!user.settings.hasOwnProperty('mobile_notifications')) {
        user.settings.mobile_notifications = user.settings.email_notifications;
      }
      if (!user.settings.hasOwnProperty('mobile_admin_notifications')) {
        user.settings.mobile_admin_notifications = user.settings.email_admin_notifications;
      }
      db.users.save(user);
    })

    js = "db.users.find(#{js_query}).forEach(#{js_for_each});"
    Mongoid::Sessions.default.command('$eval' => js)
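
    A note for newer deployments: the $eval command used here is deprecated in later MongoDB versions, so the equivalent backfill can be run directly in the mongo shell instead. Below is only a sketch, reusing this answer's field names and setting just the missing fields rather than re-saving the whole document.

    // Sketch: the same backfill in the mongo shell, without $eval.
    db.users.find({
        $or: [
            { 'settings.mobile_notifications': { $exists: false } },
            { 'settings.mobile_admin_notifications': { $exists: false } }
        ]
    }).forEach(function(user) {
        var set = {};
        if (!user.settings.hasOwnProperty('mobile_notifications')) {
            set['settings.mobile_notifications'] = user.settings.email_notifications;
        }
        if (!user.settings.hasOwnProperty('mobile_admin_notifications')) {
            set['settings.mobile_admin_notifications'] = user.settings.email_admin_notifications;
        }
        if (Object.keys(set).length > 0) {
            db.users.update({ _id: user._id }, { $set: set });  // update only the missing fields
        }
    });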





        answered Jun 8 '16 at 15:07









        Chris Bloom






























