Guarantee compatibility with multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above.

Minimize dependencies to prevent version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file from a URL:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to accomplish transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that require immediate processing of audio data:

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}"));

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}"));
```
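The `GetAudio` call in the article's streaming example is pseudocode. As a minimal sketch of one way to feed the transcriber, the helper below paces raw 16-bit, 16 kHz mono PCM from a file to mimic a live microphone. The helper name, file format, and the assumption that `SendAudioAsync` accepts a `byte[]` chunk are illustrative, not part of the SDK:

```csharp
// Hypothetical helper: streams raw 16-bit, 16 kHz mono PCM from a file
// in ~100 ms chunks, simulating a live microphone feed.
static async Task StreamFileAsync(string path, Func<byte[], Task> send)
{
    // 16,000 samples/s * 2 bytes/sample / 10 = 3,200 bytes per 100 ms chunk
    const int chunkSize = 16_000 * 2 / 10;

    await using var file = File.OpenRead(path);
    var buffer = new byte[chunkSize];
    int bytesRead;
    while ((bytesRead = await file.ReadAsync(buffer)) > 0)
    {
        await send(buffer[..bytesRead]); // forward the chunk downstream
        await Task.Delay(100);           // pace at roughly real time
    }
}
```

With the transcriber connected, this could be invoked as `await StreamFileAsync("./audio.raw", transcriber.SendAudioAsync);` in place of the `GetAudio` pseudocode.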
Once the event handlers are in place, connect the transcriber, stream audio to it, and close the session when done:

```csharp
await transcriber.ConnectAsync();

// Pseudocode for obtaining audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Apps

The SDK integrates with LeMUR, allowing developers to build large language model (LLM) applications on voice data. Below is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);
```

Audio Intelligence Models

In addition, the SDK comes with built-in support for audio intelligence models, enabling sentiment analysis and other advanced features:

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

For more details, see the official AssemblyAI blog.

Image source: Shutterstock.