NVIDIA Integration in Gathr

Integrate your models with NVIDIA services such as NVIDIA NIM (NVIDIA Inference Microservices) or NVIDIA Triton Inference Server. Add connections for these services, then use the NVIDIA NIM and NVIDIA Triton processors in ETL applications to generate inferences on your prediction data.

These services allow you to deploy, manage, and scale AI models, as well as integrate them into various data tasks such as classification, information extraction, text summarization, and sentiment analysis.

Supported Services and Tools

  • NVIDIA NIM

Supported version: NIM 2.9.2.

NIM lets you deploy AI models on Kubernetes clusters for flexible, scalable, and fault-tolerant operations.

  • NVIDIA Triton Inference Server

    Triton provides a platform for deploying AI and ML models at scale.

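Outside of Gathr, both services expose standard HTTP APIs: NIM typically serves an OpenAI-compatible REST endpoint, while Triton implements the KServe v2 inference protocol. As an illustration only (the base URL and model names below are hypothetical placeholders, not Gathr configuration), a minimal sketch of the request shapes each service expects:

```python
import json

# Hedged sketch: NIM usually exposes an OpenAI-compatible /v1/chat/completions
# endpoint, and Triton serves inference under /v2/models/{model}/infer.
# Model names and URLs here are assumptions for illustration.

def nim_chat_payload(model: str, prompt: str, max_tokens: int = 64) -> dict:
    """Build a request body for NIM's OpenAI-compatible chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def triton_infer_url(base_url: str, model: str) -> str:
    """Build the KServe v2 inference URL served by Triton."""
    return f"{base_url}/v2/models/{model}/infer"

# Hypothetical model name and host, shown only to make the shapes concrete.
payload = nim_chat_payload("meta/llama3-8b-instruct", "Summarize this ticket.")
print(json.dumps(payload, indent=2))
print(triton_infer_url("http://localhost:8000", "sentiment_model"))
```

The Gathr processors handle these requests for you; the sketch is only meant to show what travels over the wire.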

Steps for Integration

  1. Add a connection to the NVIDIA service.

    For NIM: Refer to the NVIDIA NIM Connection topic for more details.

    For Triton: Refer to the NVIDIA Triton Connection topic for more details.

    Refer to the NVIDIA Models Listing Page for details on listed models.

  2. Make sure the model is in the Ready state.

  3. Use the NVIDIA NIM and NVIDIA Triton processors for inference.

    See the NVIDIA NIM Processor and NVIDIA Triton Processor topics for configuration details when using them in ETL applications.
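Steps 2 and 3 can also be checked directly against Triton's KServe v2 HTTP API, which is useful when debugging a connection. A minimal sketch, assuming a locally reachable Triton server (the host, port, model name, and input tensor name are hypothetical):

```python
import json
import urllib.request

# Hedged sketch of step 2: Triton's KServe v2 protocol exposes a per-model
# readiness endpoint; an HTTP 200 response means the model is Ready.

def model_ready_url(base_url: str, model: str) -> str:
    """URL that returns HTTP 200 when the model is loaded and ready."""
    return f"{base_url}/v2/models/{model}/ready"

def is_model_ready(base_url: str, model: str) -> bool:
    """Return True if Triton reports the model as ready."""
    try:
        with urllib.request.urlopen(model_ready_url(base_url, model), timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

# Step 3 in wire terms: a KServe v2 inference body for a 1-D FP32 input tensor.
def triton_infer_body(input_name: str, data: list) -> dict:
    """Build a KServe v2 inference request body."""
    return {
        "inputs": [{
            "name": input_name,
            "shape": [len(data)],
            "datatype": "FP32",
            "data": data,
        }]
    }

print(json.dumps(triton_infer_body("INPUT0", [0.1, 0.2]), indent=2))
```

In Gathr itself, the NVIDIA Triton processor builds and sends this request for each incoming record; the sketch only shows what a manual readiness check and inference call look like.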
