Mongo Emitter
To add a Mongo emitter to your pipeline, drag the emitter onto the canvas, connect it to a Data Source or processor, and click on it to configure.
Mongo Emitter Configuration
| Field | Description | 
|---|---|
| Connection Name | Select a saved connection from the drop-down list. | 
| Database Name | Select the database to which the data will be written. | 
| Collection Name | Select the database collection to which the data will be written. | 
| Output Fields | Select the fields from the drop-down list that need to be included in the output data. | 
| Extended BSON Types | This option is checked by default to enable extended BSON types while writing data to MongoDB. | 
| Replace Document | This option is checked by default to replace the whole document when saving datasets that contain an _id field. If unchecked, only the fields in the document that match the fields in the dataset are updated. | 
| Local Threshold | Provide the threshold value (in milliseconds) for choosing a server from multiple MongoDB servers. | 
| Max Batch Size | The maximum batch size for bulk operations when saving data. The default value is 512. | 
| Write Concern W | The w option requests acknowledgment that the write operation has propagated to a specified number of mongod instances or to mongod instances with specified tags. | 
| Write Concern Timeout | Specify a wtimeout value (in milliseconds) so the query can time out if the write concern cannot be enforced. Applicable for w values greater than 1. | 
| Shard Key | Provide the value for the Shard Key. MongoDB partitions data in the collection using ranges of shard key values. The field should be indexed and contain unique values. | 
| Force Insert | Check this option to enable Force Insert, which saves inserts even if the dataset contains _id fields. | 
| Ordered | This option is checked by default to set the ordered property of bulk operations, so that writes are applied in order. | 
| Save Mode | Specifies the expected behavior when saving data to the data sink.<br>ErrorIfExists: if the data already exists, an exception is thrown.<br>Append: if the data/table already exists, the contents are appended to the existing data.<br>Overwrite: if the data/table already exists, the existing data is overwritten by the new contents.<br>Ignore: if the data/table already exists, the save operation does not save the new contents and does not change the existing data. This is similar to CREATE TABLE IF NOT EXISTS in SQL. | 
| ADD CONFIGURATION | Additional custom configurations can be added. | 
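The four Save Mode behaviors can be sketched in plain Python. This is an illustrative model only: the `save` function and the dict used as a stand-in for the data sink are hypothetical, not Gathr or MongoDB APIs.

```python
# Illustrative sketch of Save Mode semantics; the sink is modeled as a
# dict mapping a collection name to a list of rows. Not a real Gathr or
# MongoDB API -- all names here are hypothetical.

class SaveModeError(Exception):
    """Raised by ErrorIfExists when the target already holds data."""

def save(sink, collection, rows, mode):
    if collection not in sink:
        # No existing data: every mode simply writes the new rows.
        sink[collection] = list(rows)
    elif mode == "ErrorIfExists":
        raise SaveModeError(f"{collection} already exists")
    elif mode == "Append":
        sink[collection] = sink[collection] + list(rows)
    elif mode == "Overwrite":
        sink[collection] = list(rows)
    elif mode == "Ignore":
        pass  # existing data left untouched, new rows are dropped
    return sink

# Example walk-through:
sink = {"users": [{"_id": 1}]}
save(sink, "users", [{"_id": 2}], "Append")     # users -> two documents
save(sink, "users", [{"_id": 3}], "Overwrite")  # users -> only {"_id": 3}
save(sink, "users", [{"_id": 4}], "Ignore")     # users unchanged
```

Ignore mirrors SQL's CREATE TABLE IF NOT EXISTS: the operation succeeds without error, but the existing data wins.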
If you have any feedback on Gathr documentation, please email us!