Struct aws_sdk_transcribestreaming::operation::start_medical_stream_transcription::builders::StartMedicalStreamTranscriptionFluentBuilder

pub struct StartMedicalStreamTranscriptionFluentBuilder { /* private fields */ }

Fluent builder constructing a request to StartMedicalStreamTranscription.
Starts a bidirectional HTTP/2 or WebSocket stream where audio is streamed to Amazon Transcribe Medical and the transcription results are streamed to your application.
The following parameters are required:

- language-code
- media-encoding
- sample-rate

For more information on streaming with Amazon Transcribe Medical, see Transcribing streaming audio.
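Putting the required parameters together, a request might be assembled as below. This is a hedged sketch, not taken from this reference: the client setup, the `audio_input` stream, and the chosen specialty/type values are illustrative assumptions, and it requires AWS credentials to actually run.

```rust
use aws_sdk_transcribestreaming::types::error::AudioStreamError;
use aws_sdk_transcribestreaming::types::{AudioStream, LanguageCode, MediaEncoding, Specialty, Type};
use aws_sdk_transcribestreaming::Client;
use futures::Stream;

async fn start_medical_stream(
    client: &Client,
    audio_input: impl Stream<Item = Result<AudioStream, AudioStreamError>> + Send + Sync + 'static,
) -> Result<(), Box<dyn std::error::Error>> {
    let mut output = client
        .start_medical_stream_transcription()
        .language_code(LanguageCode::EnUs)   // required; en-US is the only supported value
        .media_encoding(MediaEncoding::Pcm)  // required
        .media_sample_rate_hertz(16_000)     // required; must match the source audio
        .specialty(Specialty::Primarycare)
        .r#type(Type::Conversation)
        .audio_stream(audio_input.into())    // the sender is built from a stream of audio events
        .send()
        .await?;

    // Transcription results arrive incrementally on the output's event stream.
    while let Some(event) = output.transcript_result_stream.recv().await? {
        println!("{event:?}");
    }
    Ok(())
}
```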
Implementations

impl StartMedicalStreamTranscriptionFluentBuilder

pub fn as_input(&self) -> &StartMedicalStreamTranscriptionInputBuilder
Access the StartMedicalStreamTranscription input builder as a reference.
pub async fn send(
    self,
) -> Result<StartMedicalStreamTranscriptionOutput, SdkError<StartMedicalStreamTranscriptionError, HttpResponse>>
Sends the request and returns the response.
If an error occurs, an SdkError will be returned with additional details that can be matched against.
By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
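The returned SdkError can be matched to separate the modeled service error from transport-level failures. A minimal sketch, assuming `result` is the value returned by `send().await`; the variants shown are the generic SdkError cases, not specific to this operation:

```rust
use aws_sdk_transcribestreaming::error::SdkError;

// `result` is the value returned by `send().await` (illustrative binding).
match result {
    Ok(mut output) => {
        // Consume output.transcript_result_stream here.
    }
    Err(SdkError::ServiceError(context)) => {
        // The modeled StartMedicalStreamTranscriptionError; match further on
        // context.err() to handle specific error kinds.
        eprintln!("service error: {:?}", context.err());
    }
    Err(SdkError::TimeoutError(_)) => eprintln!("request timed out"),
    Err(other) => eprintln!("other failure: {other}"),
}
```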
pub fn customize(
    self,
) -> CustomizableOperation<StartMedicalStreamTranscriptionOutput, StartMedicalStreamTranscriptionError, Self>
Consumes this builder, creating a customizable operation that can be modified before being sent.
pub fn language_code(self, input: LanguageCode) -> Self
Specify the language code that represents the language spoken in your audio. Amazon Transcribe Medical only supports US English (en-US).

pub fn set_language_code(self, input: Option<LanguageCode>) -> Self
Specify the language code that represents the language spoken in your audio. Amazon Transcribe Medical only supports US English (en-US).

pub fn get_language_code(&self) -> &Option<LanguageCode>
Specify the language code that represents the language spoken in your audio. Amazon Transcribe Medical only supports US English (en-US).
pub fn media_sample_rate_hertz(self, input: i32) -> Self
The sample rate of the input audio (in hertz). Amazon Transcribe Medical supports a range from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.

pub fn set_media_sample_rate_hertz(self, input: Option<i32>) -> Self
The sample rate of the input audio (in hertz). Amazon Transcribe Medical supports a range from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.

pub fn get_media_sample_rate_hertz(&self) -> &Option<i32>
The sample rate of the input audio (in hertz). Amazon Transcribe Medical supports a range from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.
pub fn media_encoding(self, input: MediaEncoding) -> Self
Specify the encoding used for the input audio. Supported formats are:

- FLAC
- OPUS-encoded audio in an Ogg container
- PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

For more information, see Media formats.

pub fn set_media_encoding(self, input: Option<MediaEncoding>) -> Self
Specify the encoding used for the input audio. Supported formats are:

- FLAC
- OPUS-encoded audio in an Ogg container
- PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

For more information, see Media formats.

pub fn get_media_encoding(&self) -> &Option<MediaEncoding>
Specify the encoding used for the input audio. Supported formats are:

- FLAC
- OPUS-encoded audio in an Ogg container
- PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

For more information, see Media formats.
pub fn vocabulary_name(self, input: impl Into<String>) -> Self
Specify the name of the custom vocabulary that you want to use when processing your transcription. Note that vocabulary names are case sensitive.

pub fn set_vocabulary_name(self, input: Option<String>) -> Self
Specify the name of the custom vocabulary that you want to use when processing your transcription. Note that vocabulary names are case sensitive.

pub fn get_vocabulary_name(&self) -> &Option<String>
Specify the name of the custom vocabulary that you want to use when processing your transcription. Note that vocabulary names are case sensitive.
pub fn specialty(self, input: Specialty) -> Self
Specify the medical specialty contained in your audio.

pub fn set_specialty(self, input: Option<Specialty>) -> Self
Specify the medical specialty contained in your audio.

pub fn get_specialty(&self) -> &Option<Specialty>
Specify the medical specialty contained in your audio.
pub fn r#type(self, input: Type) -> Self
Specify the type of input audio. For example, choose DICTATION for a provider dictating patient notes and CONVERSATION for a dialogue between a patient and a medical professional.

pub fn set_type(self, input: Option<Type>) -> Self
Specify the type of input audio. For example, choose DICTATION for a provider dictating patient notes and CONVERSATION for a dialogue between a patient and a medical professional.

pub fn get_type(&self) -> &Option<Type>
Specify the type of input audio. For example, choose DICTATION for a provider dictating patient notes and CONVERSATION for a dialogue between a patient and a medical professional.
pub fn show_speaker_label(self, input: bool) -> Self
Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.
For more information, see Partitioning speakers (diarization).

pub fn set_show_speaker_label(self, input: Option<bool>) -> Self
Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.
For more information, see Partitioning speakers (diarization).

pub fn get_show_speaker_label(&self) -> &Option<bool>
Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.
For more information, see Partitioning speakers (diarization).
pub fn session_id(self, input: impl Into<String>) -> Self
Specify a name for your transcription session. If you don't include this parameter in your request, Amazon Transcribe Medical generates an ID and returns it in the response.
You can use a session ID to retry a streaming session.

pub fn set_session_id(self, input: Option<String>) -> Self
Specify a name for your transcription session. If you don't include this parameter in your request, Amazon Transcribe Medical generates an ID and returns it in the response.
You can use a session ID to retry a streaming session.

pub fn get_session_id(&self) -> &Option<String>
Specify a name for your transcription session. If you don't include this parameter in your request, Amazon Transcribe Medical generates an ID and returns it in the response.
You can use a session ID to retry a streaming session.
pub fn audio_stream(
    self,
    input: EventStreamSender<AudioStream, AudioStreamError>,
) -> Self
An encoded stream of audio blobs. Audio streams are encoded as either HTTP/2 or WebSocket data frames.
For more information, see Transcribing streaming audio.

pub fn set_audio_stream(
    self,
    input: Option<EventStreamSender<AudioStream, AudioStreamError>>,
) -> Self
An encoded stream of audio blobs. Audio streams are encoded as either HTTP/2 or WebSocket data frames.
For more information, see Transcribing streaming audio.

pub fn get_audio_stream(
    &self,
) -> &Option<EventStreamSender<AudioStream, AudioStreamError>>
An encoded stream of audio blobs. Audio streams are encoded as either HTTP/2 or WebSocket data frames.
For more information, see Transcribing streaming audio.
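One way to obtain an EventStreamSender is to convert a futures Stream of AudioStream events. A hedged sketch, assuming raw 16-bit PCM chunks are already available; `read_pcm_chunks` and `builder` are illustrative names, not part of this API:

```rust
use aws_sdk_transcribestreaming::primitives::Blob;
use aws_sdk_transcribestreaming::types::{AudioEvent, AudioStream};

// Wrap raw PCM chunks (e.g. read from a file or microphone) as AudioStream events.
let chunks: Vec<Vec<u8>> = read_pcm_chunks(); // illustrative helper, not part of the SDK
let events = futures::stream::iter(chunks.into_iter().map(|chunk| {
    Ok(AudioStream::AudioEvent(
        AudioEvent::builder().audio_chunk(Blob::new(chunk)).build(),
    ))
}));

// `.into()` converts the stream into the EventStreamSender the setter expects.
let builder = builder.audio_stream(events.into());
```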
pub fn enable_channel_identification(self, input: bool) -> Self
Enables channel identification in multi-channel audio.
Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.
If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript is not separated by channel.
For more information, see Transcribing multi-channel audio.

pub fn set_enable_channel_identification(self, input: Option<bool>) -> Self
Enables channel identification in multi-channel audio.
Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.
If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript is not separated by channel.
For more information, see Transcribing multi-channel audio.

pub fn get_enable_channel_identification(&self) -> &Option<bool>
Enables channel identification in multi-channel audio.
Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.
If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript is not separated by channel.
For more information, see Transcribing multi-channel audio.
pub fn number_of_channels(self, input: i32) -> Self
Specify the number of channels in your audio stream. Up to two channels are supported.

pub fn set_number_of_channels(self, input: Option<i32>) -> Self
Specify the number of channels in your audio stream. Up to two channels are supported.

pub fn get_number_of_channels(&self) -> &Option<i32>
Specify the number of channels in your audio stream. Up to two channels are supported.
pub fn content_identification_type(
    self,
    input: MedicalContentIdentificationType,
) -> Self
Labels all personal health information (PHI) identified in your transcript.
Content identification is performed at the segment level; PHI is flagged upon complete transcription of an audio segment.
For more information, see Identifying personal health information (PHI) in a transcription.

pub fn set_content_identification_type(
    self,
    input: Option<MedicalContentIdentificationType>,
) -> Self
Labels all personal health information (PHI) identified in your transcript.
Content identification is performed at the segment level; PHI is flagged upon complete transcription of an audio segment.
For more information, see Identifying personal health information (PHI) in a transcription.

pub fn get_content_identification_type(
    &self,
) -> &Option<MedicalContentIdentificationType>
Labels all personal health information (PHI) identified in your transcript.
Content identification is performed at the segment level; PHI is flagged upon complete transcription of an audio segment.
For more information, see Identifying personal health information (PHI) in a transcription.
Trait Implementations

Auto Trait Implementations
impl Freeze for StartMedicalStreamTranscriptionFluentBuilder
impl !RefUnwindSafe for StartMedicalStreamTranscriptionFluentBuilder
impl Send for StartMedicalStreamTranscriptionFluentBuilder
impl Sync for StartMedicalStreamTranscriptionFluentBuilder
impl Unpin for StartMedicalStreamTranscriptionFluentBuilder
impl !UnwindSafe for StartMedicalStreamTranscriptionFluentBuilder
Blanket Implementations

impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.

impl<T> Paint for T
where
    T: ?Sized,
fn fg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the foreground set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like red() and green(), which have the same functionality but are pithier.
Example
Set the foreground color to white using fg():
use yansi::{Paint, Color};
painted.fg(Color::White);
Set the foreground color to white using white():
use yansi::Paint;
painted.white();
fn bright_black(&self) -> Painted<&T>
fn bright_red(&self) -> Painted<&T>
fn bright_green(&self) -> Painted<&T>
fn bright_yellow(&self) -> Painted<&T>
fn bright_blue(&self) -> Painted<&T>
fn bright_magenta(&self) -> Painted<&T>
fn bright_cyan(&self) -> Painted<&T>
fn bright_white(&self) -> Painted<&T>
fn bg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the background set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like on_red() and on_green(), which have the same functionality but are pithier.
Example
Set the background color to red using bg():
use yansi::{Paint, Color};
painted.bg(Color::Red);
Set the background color to red using on_red():
use yansi::Paint;
painted.on_red();
fn on_primary(&self) -> Painted<&T>
fn on_magenta(&self) -> Painted<&T>
fn on_bright_black(&self) -> Painted<&T>
fn on_bright_red(&self) -> Painted<&T>
fn on_bright_green(&self) -> Painted<&T>
fn on_bright_yellow(&self) -> Painted<&T>
fn on_bright_blue(&self) -> Painted<&T>
fn on_bright_magenta(&self) -> Painted<&T>
fn on_bright_cyan(&self) -> Painted<&T>
fn on_bright_white(&self) -> Painted<&T>
fn attr(&self, value: Attribute) -> Painted<&T>
Enables the styling Attribute value.
This method should be used rarely. Instead, prefer to use attribute-specific builder methods like bold() and underline(), which have the same functionality but are pithier.
Example
Make text bold using attr():
use yansi::{Paint, Attribute};
painted.attr(Attribute::Bold);
Make text bold using bold():
use yansi::Paint;
painted.bold();
fn rapid_blink(&self) -> Painted<&T>

fn quirk(&self, value: Quirk) -> Painted<&T>
Enables the yansi Quirk value.
This method should be used rarely. Instead, prefer to use quirk-specific builder methods like mask() and wrap(), which have the same functionality but are pithier.
Example
Enable wrapping using quirk():
use yansi::{Paint, Quirk};
painted.quirk(Quirk::Wrap);
Enable wrapping using wrap():
use yansi::Paint;
painted.wrap();
fn clear(&self) -> Painted<&T>
👎 Deprecated since 1.0.1: renamed to resetting() due to conflicts with Vec::clear(). The clear() method will be removed in a future release.

fn whenever(&self, value: Condition) -> Painted<&T>
Conditionally enable styling based on whether the Condition value applies. Replaces any previous condition.
See the crate level docs for more details.
Example
Enable styling painted only when both stdout and stderr are TTYs:
use yansi::{Paint, Condition};
painted.red().on_yellow().whenever(Condition::STDOUTERR_ARE_TTY);