Trait aws_smithy_http_server_python::PyApp
pub trait PyApp: Clone + IntoPy<PyObject> {
    // Required methods
    fn workers(&self) -> &Mutex<Vec<PyObject>>;
    fn context(&self) -> &Option<PyObject>;
    fn handlers(&mut self) -> &mut HashMap<String, PyHandler>;
    fn build_service(
        &mut self,
        event_loop: &PyAny,
    ) -> PyResult<BoxCloneService<Request<Body>, Response<BoxBody>, Infallible>>;

    // Provided methods
    fn graceful_termination(&self, workers: &Mutex<Vec<PyObject>>) -> ! { ... }
    fn immediate_termination(&self, workers: &Mutex<Vec<PyObject>>) -> ! { ... }
    fn block_on_rust_signals(&self) { ... }
    fn register_python_signals(
        &self,
        py: Python<'_>,
        event_loop: PyObject,
    ) -> PyResult<()> { ... }
    fn start_hyper_worker(
        &mut self,
        py: Python<'_>,
        socket: &PyCell<PySocket>,
        event_loop: &PyAny,
        service: BoxCloneService<Request<Body>, Response<BoxBody>, Infallible>,
        worker_number: isize,
        tls: Option<PyTlsConfig>,
    ) -> PyResult<()> { ... }
    fn register_operation(
        &mut self,
        py: Python<'_>,
        name: &str,
        func: PyObject,
    ) -> PyResult<()> { ... }
    fn configure_python_event_loop<'py>(
        &self,
        py: Python<'py>,
    ) -> PyResult<&'py PyAny> { ... }
    fn run_server(
        &mut self,
        py: Python<'_>,
        address: Option<String>,
        port: Option<i32>,
        backlog: Option<i32>,
        workers: Option<usize>,
        tls: Option<PyTlsConfig>,
    ) -> PyResult<()> { ... }
    fn run_lambda_handler(&mut self, py: Python<'_>) -> PyResult<()> { ... }
    fn build_and_configure_service(
        &mut self,
        py: Python<'_>,
        event_loop: &PyAny,
    ) -> PyResult<BoxCloneService<Request<Body>, Response<BoxBody>, Infallible>> { ... }
}
Trait defining a Python application.
A Python application requires handling multiple processes and signals, and allows registering Python functions that will be executed as business logic by the code-generated Rust handlers. To function properly, the application requires some state:
- workers: the list of child Python worker processes, protected by a Mutex.
- context: the optional Python object that should be passed inside the Rust state struct.
- handlers: the mapping between an operation name and its PyHandler representation.
Since the Python application spawns multiple workers, it also requires signal handling to allow the graceful termination of multiple Hyper servers. The main Rust process registers signals and uses them to know when it is time to loop through all the active workers and terminate them. Workers register their own signal handlers and attach them to the Python event loop, ensuring all coroutines are cancelled before a worker terminates.
This trait will be implemented by the code generated by the PythonApplicationGenerator Kotlin class.
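As a rough illustration only (not the generated code), a type implementing this trait could hold that state directly; the field layout below is an assumption that simply mirrors the three accessor methods:

use std::collections::HashMap;
use parking_lot::Mutex;
use pyo3::prelude::*;
use aws_smithy_http_server_python::PyHandler;

// Hypothetical field layout mirroring the workers/context/handlers accessors;
// the struct actually generated by smithy-rs may differ.
pub struct App {
    workers: Mutex<Vec<PyObject>>,
    context: Option<PyObject>,
    handlers: HashMap<String, PyHandler>,
}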
Required Methods§
fn workers(&self) -> &Mutex<Vec<PyObject>>
List of active Python workers registered with this application.
fn context(&self) -> &Option<PyObject>
Optional Python context object that will be passed as part of the Rust state.
fn handlers(&mut self) -> &mut HashMap<String, PyHandler>
Mapping between operation names and their PyHandler representation.
fn build_service(
    &mut self,
    event_loop: &PyAny,
) -> PyResult<BoxCloneService<Request<Body>, Response<BoxBody>, Infallible>>
Build the app’s Service using the given event_loop.
Provided Methods§
fn graceful_termination(&self, workers: &Mutex<Vec<PyObject>>) -> !
Handle the graceful termination of Python workers by looping through all the active workers and calling terminate() on them. If termination fails, this method will try to kill() any failed worker.
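A minimal sketch of that terminate-then-kill loop, assuming the registered workers are Python multiprocessing.Process objects (the exact exit path used by the real implementation is not shown here):

use parking_lot::Mutex;
use pyo3::prelude::*;

// Sketch only: try terminate() on each worker, falling back to kill().
fn terminate_workers(workers: &Mutex<Vec<PyObject>>) {
    Python::with_gil(|py| {
        for worker in workers.lock().iter() {
            if worker.call_method0(py, "terminate").is_err() {
                // terminate() failed or the worker is unresponsive: kill it.
                let _ = worker.call_method0(py, "kill");
            }
        }
    });
    std::process::exit(0);
}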
fn immediate_termination(&self, workers: &Mutex<Vec<PyObject>>) -> !
Handle the immediate termination of Python workers by looping through all the active workers and calling kill() on them.
fn block_on_rust_signals(&self)
Register and handle signals on the main Rust thread. Signals not registered in this method are ignored.
Signals supported:
- SIGTERM|SIGQUIT - graceful termination of all workers.
- SIGINT - immediate termination of all workers.
Other signals are NOOP.
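One way such a signal loop could look, sketched with the signal_hook crate (the crate actually used and the dispatch details are assumptions):

use signal_hook::consts::signal::{SIGINT, SIGQUIT, SIGTERM};
use signal_hook::iterator::Signals;

// Sketch: block the main Rust thread until a registered signal arrives,
// then dispatch to the matching termination strategy.
fn block_on_signals() -> std::io::Result<()> {
    let mut signals = Signals::new(&[SIGTERM, SIGQUIT, SIGINT])?;
    for signal in signals.forever() {
        match signal {
            SIGTERM | SIGQUIT => { /* graceful_termination(workers) */ }
            SIGINT => { /* immediate_termination(workers) */ }
            _ => { /* NOOP: every other signal is ignored */ }
        }
    }
    Ok(())
}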
fn register_python_signals(
    &self,
    py: Python<'_>,
    event_loop: PyObject,
) -> PyResult<()>
Register and handle termination of all the tasks on the Python asynchronous event loop. We only register SIGQUIT and SIGINT since the main signal handling is done by Rust.
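Hooking handlers onto the loop could be done, for instance, by executing a small Python snippet with the loop in scope; the sketch below is illustrative only and is not the crate's actual implementation:

use pyo3::prelude::*;
use pyo3::types::IntoPyDict;

// Sketch: attach SIGINT/SIGQUIT handlers to the asyncio event loop so that
// pending coroutines are cancelled before the worker exits.
fn attach_python_signal_handlers(py: Python<'_>, event_loop: PyObject) -> PyResult<()> {
    let globals = [("event_loop", event_loop)].into_py_dict(py);
    py.run(
        r#"
import asyncio
import signal

def cancel_all_tasks():
    for task in asyncio.all_tasks(event_loop):
        task.cancel()

for sig in (signal.SIGINT, signal.SIGQUIT):
    event_loop.add_signal_handler(sig, cancel_all_tasks)
"#,
        Some(globals),
        None,
    )
}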
fn start_hyper_worker(
    &mut self,
    py: Python<'_>,
    socket: &PyCell<PySocket>,
    event_loop: &PyAny,
    service: BoxCloneService<Request<Body>, Response<BoxBody>, Infallible>,
    worker_number: isize,
    tls: Option<PyTlsConfig>,
) -> PyResult<()>
Start a single worker with its own Tokio and Python async runtime and the provided shared socket.
The Python asynchronous event loop needs to be started and handled during the lifetime of the process, and it is passed to this method by the caller, which can use configure_python_event_loop to properly set it up.
We retrieve the Python context object, if set up by the user via the PyApp::context method, generate the state structure, and build the aws_smithy_http_server::routing::Router, filling it with the functions generated by PythonServerOperationHandlerGenerator.kt.
Finally we get a cloned reference to the underlying [socket2::Socket].
Now that all the setup is done, we can start the two runtimes and run the [hyper] server. We spawn a thread with a new [tokio::runtime], set up the middlewares, and finally block the thread on Hyper's serve() method.
The main process continues and is ultimately blocked on the Python loop.run_forever() call.
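The overall shape of that dance is roughly the following (a structural sketch only; building the server from the cloned socket and applying the middlewares are elided):

use pyo3::prelude::*;

// Sketch: run the Rust server on its own thread and Tokio runtime, then park
// the calling thread on the Python event loop.
fn run_worker(event_loop: &PyAny) -> PyResult<()> {
    // Dedicated OS thread owning a fresh Tokio runtime for this worker.
    std::thread::spawn(|| {
        let rt = tokio::runtime::Builder::new_multi_thread()
            .enable_all()
            .build()
            .expect("failed to build Tokio runtime");
        rt.block_on(async {
            // ... bind the cloned socket, apply middlewares, serve with hyper ...
        });
    });
    // The worker process stays blocked on the Python event loop until shutdown.
    event_loop.call_method0("run_forever")?;
    Ok(())
}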
fn register_operation(
    &mut self,
    py: Python<'_>,
    name: &str,
    func: PyObject,
) -> PyResult<()>
Register a Python function to be executed inside the Smithy Rust handler.
Some information is needed to execute the Python code from a Rust handler, such as whether the registered function needs to be awaited (if it is a coroutine) and the number of arguments it accepts, which tells us whether the handler wants the state to be passed or not.
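In practice, the code generator is expected to expose one Python-visible method per operation that forwards the user's function here; a hypothetical wrapper for an operation named CheckHealth (the method and operation names are assumptions, and the App type is the one from the example under run_server below):

use pyo3::prelude::*;
use aws_smithy_http_server_python::PyApp;

// Hypothetical generated wrapper: registers `func` under the operation name so
// the code-generated Rust handler can call back into Python.
#[pymethods]
impl App {
    #[pyo3(text_signature = "($self, func)")]
    pub fn check_health(&mut self, py: Python, func: PyObject) -> PyResult<()> {
        self.register_operation(py, "check_health", func)
    }
}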
fn configure_python_event_loop<'py>(
    &self,
    py: Python<'py>,
) -> PyResult<&'py PyAny>
Configure the Python asyncio event loop.
First of all, we install uvloop as the main Python event loop. Thanks to libuv, uvloop performs ~20% better than the Python standard event loop in most benchmarks, while being 100% compatible. If uvloop is not available as a dependency, we just fall back to the standard Python event loop.
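A sketch of that uvloop-with-fallback behaviour in pyo3 terms (illustrative only; the real implementation also wires the resulting loop into the Rust async machinery):

use pyo3::prelude::*;

// Sketch: try to install uvloop, silently falling back to the standard
// asyncio event loop if the dependency is missing.
fn new_event_loop(py: Python<'_>) -> PyResult<&PyAny> {
    if let Ok(uvloop) = py.import("uvloop") {
        // Make uvloop the default event loop policy for this interpreter.
        uvloop.call_method0("install")?;
    }
    let asyncio = py.import("asyncio")?;
    asyncio.call_method0("new_event_loop")
}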
fn run_server(
    &mut self,
    py: Python<'_>,
    address: Option<String>,
    port: Option<i32>,
    backlog: Option<i32>,
    workers: Option<usize>,
    tls: Option<PyTlsConfig>,
) -> PyResult<()>
Main entrypoint: start the server on multiple workers.
The multiprocessing server is achieved using the ability of a Python interpreter to clone and start itself as a new process.
The shared socket is created and, using the multiprocessing::Process module, multiple workers are started with the method self.start_worker() as their target.
NOTE: this method ends up calling self.start_worker from the Python context, forcing the struct implementing this trait to also implement a start_worker method. This is done to ensure the Python event loop is started in the right child process space before being passed to start_hyper_worker.
PythonApplicationGenerator.kt generates the start_worker method:
use std::convert::Infallible;
use std::collections::HashMap;
use pyo3::prelude::*;
use aws_smithy_http_server_python::{PyApp, PyHandler};
use aws_smithy_http_server::body::{Body, BoxBody};
use parking_lot::Mutex;
use http::{Request, Response};
use tower::util::BoxCloneService;

#[pyclass]
#[derive(Debug, Clone)]
pub struct App {}

impl PyApp for App {
    fn workers(&self) -> &Mutex<Vec<PyObject>> { todo!() }
    fn context(&self) -> &Option<PyObject> { todo!() }
    fn handlers(&mut self) -> &mut HashMap<String, PyHandler> { todo!() }
    fn build_service(&mut self, event_loop: &PyAny) -> PyResult<BoxCloneService<Request<Body>, Response<BoxBody>, Infallible>> { todo!() }
}

#[pymethods]
impl App {
    #[pyo3(text_signature = "($self, socket, worker_number, tls)")]
    pub fn start_worker(
        &mut self,
        py: pyo3::Python,
        socket: &pyo3::PyCell<aws_smithy_http_server_python::PySocket>,
        worker_number: isize,
        tls: Option<aws_smithy_http_server_python::tls::PyTlsConfig>,
    ) -> pyo3::PyResult<()> {
        // Configure the per-worker Python event loop, build the service and
        // hand both over to the Hyper worker.
        let event_loop = self.configure_python_event_loop(py)?;
        let service = self.build_service(event_loop)?;
        self.start_hyper_worker(py, socket, event_loop, service, worker_number, tls)
    }
}
fn run_lambda_handler(&mut self, py: Python<'_>) -> PyResult<()>
Lambda main entrypoint: start the handler on Lambda.
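As with start_worker, the generated application is expected to surface these entrypoints to Python through thin #[pymethods] wrappers. A hypothetical sketch, reusing the App type from the example above (method names and argument handling are assumptions):

use pyo3::prelude::*;
use aws_smithy_http_server_python::{tls::PyTlsConfig, PyApp};

// Hypothetical generated wrappers delegating to the trait entrypoints.
#[pymethods]
impl App {
    pub fn run(
        &mut self,
        py: Python,
        address: Option<String>,
        port: Option<i32>,
        backlog: Option<i32>,
        workers: Option<usize>,
        tls: Option<PyTlsConfig>,
    ) -> PyResult<()> {
        self.run_server(py, address, port, backlog, workers, tls)
    }

    pub fn run_lambda(&mut self, py: Python) -> PyResult<()> {
        self.run_lambda_handler(py)
    }
}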