One of the fastest patterns we've found for pushing data into MSSQL is bulk copy. The process we use is to bulk copy the data into a temp table, then execute a stored procedure to merge that data into the permanent tables.

I've typically used this pattern in C#, where connection pooling is built in and you have direct control over when a connection is "closed" by properly disposing it.

Similar to this:

using (var conn = new SqlConnection("some-connection-string"))
{
  // open our connection here
  conn.Open();

  // create the temp table
  var cmd = new SqlCommand("create table #test (id int not null)", conn);
  cmd.ExecuteNonQuery();

  // bulk insert into the temp table
  var bulk = new SqlBulkCopy(conn);
  bulk.DestinationTableName = "#test";
  bulk.WriteToServer(someDataTable);

  // execute the stored procedure
  cmd = new SqlCommand("sp_merge_bulk_data", conn);
  cmd.CommandType = CommandType.StoredProcedure;
  cmd.ExecuteNonQuery();

  // clean up
  cmd = new SqlCommand("drop table #test", conn);
  cmd.ExecuteNonQuery();
}

The stored procedure above operates on the #test temp table. Since the connection is not closed* until the end of the using block, we can perform multiple operations against the temp table. *Disposing/closing actually releases the connection back into the connection pool; the underlying connection isn't physically closed until the pool no longer needs it.
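For reference, a merge procedure like sp_merge_bulk_data could be as simple as the sketch below. This is hypothetical: the procedure name comes from the example above, but the target table dest_table and its columns are placeholders for whatever your permanent schema looks like.

create procedure sp_merge_bulk_data
as
begin
  -- #test was created by the caller on this same connection,
  -- so it is still in scope inside the procedure
  insert into dest_table (id)
  select t.id
  from #test t
  where not exists (select 1 from dest_table d where d.id = t.id);
end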

In node-mssql, connection closing/pooling is also handled for you automatically when you create a new connection.

let conn = new sql.Connection(opts);

Now here's the kicker. Unlike C#'s SqlConnection, node-mssql automatically releases connections back into the pool as soon as an operation completes. This has an annoying side effect: it prevents you from working directly with a single connection through the Request class.

Now, node-mssql does have provisions for running multiple operations on a single connection via PreparedStatement, but prepared statements don't support bulk inserts.
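To show what I mean, here's roughly what the PreparedStatement route looks like (assuming a conn created as above). A prepared statement holds one connection from prepare() until unprepare(), but it only accepts parameterized SQL, so there's no way to hand it a bulk load:

const ps = new sql.PreparedStatement(conn);
ps.input('id', sql.Int);

await ps.prepare('insert into #test (id) values (@id)');
try {
  // every execute() runs on the same underlying connection...
  await ps.execute({ id: 1 });
  await ps.execute({ id: 2 });
  // ...but there's no ps.bulk(), so inserts are row-at-a-time only
}
finally {
  await ps.unprepare();
}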

So my solution was to create an isolated pool for each "bulk operation" I wanted to perform. This ensures that the same connection is used for the entire bulk process. It also lets me destroy the connection when I'm done without risking other requests piggybacking on the same connection (which causes problems when two requests create temp tables with the same name).

This is what the process looks like in node-mssql:

const sql = require('mssql');

async function bulkTest() {

  // create a unique connection pool for this op
  let opts = {
    server: 'server',
    database: 'test',
    user: 'testuser',
    password: 'testuser',
  };
  let conn = new sql.Connection(opts);
  await conn.connect();

  let req;
  try {
    // create the temp table
    let table = new sql.Table('#test');
    table.create = true;
    table.columns.add('id', sql.Int, { nullable: false });

    // push some data into the buffer
    for (let i = 0; i < 10000; i++) {
      table.rows.add(i);
    }

    // create a new request object with the isolated pool
    req = new sql.Request(conn);

    // bulk insert into the temp table
    await req.bulk(table);

    // execute the stored procedure
    await req.execute('sp_merge_bulk_data');

    // delete the temp table
    await req.batch('drop table #test');
  }
  finally {
    // shut down the pool since we're no longer using it
    await conn.close();
  }
}

This isn't much different from the C# side of the house. The only thing we're doing differently is isolating the connection via a dedicated pool. That's arguably more resource intensive, but it gets the job done.