grid.websocket_client

Module Contents

grid.websocket_client.MODEL_LIMIT_SIZE
class grid.websocket_client.WebsocketGridClient(hook, address, id: Union[int, str] = 0, auth: dict = None, is_client_worker: bool = False, log_msgs: bool = False, verbose: bool = False, chunk_size: int = MODEL_LIMIT_SIZE)

Bases: syft.WebsocketClientWorker, syft.federated.federated_client.FederatedClient

Websocket Grid Client.

url :str

Get Node URL Address.

Returns:Node’s address.
Return type:address (str)
models :list

Get models stored at remote grid node.

Returns:List of models stored in this grid node.
Return type:models (List)
_update_node_reference(self, new_id: str)

Update worker references, replacing the old node id with the new one in the hook structure.

Parameters:new_id (str) – New worker ID.
parse_address(self, address: str)

Parse an address string to set the secure flag and split it into host and port.

Parameters:address (str) – Address of the remote worker.
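
The parsing step can be sketched as a pure function. This is an illustrative reimplementation, not the actual method (which stores the results on the client instance):

```python
from urllib.parse import urlparse

def parse_address(address):
    # Illustrative sketch: a "wss" or "https" scheme implies a secure
    # connection; the rest of the address splits into host and port.
    parsed = urlparse(address)
    secure = parsed.scheme in ("wss", "https")
    return secure, parsed.hostname, parsed.port

# e.g. parse_address("wss://localhost:3000") -> (True, "localhost", 3000)
```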
get_node_id(self)

Get the node ID from the remote worker.

Returns:node id used by remote worker.
Return type:node_id (str)
connect_nodes(self, node)

Connect two remote workers to each other. If this node is authenticated, the same credentials are used to authenticate the candidate node.

Parameters:node (WebsocketGridClient) – Node that will be connected with this remote worker.
Returns:node response.
Return type:node_response (dict)
authenticate(self, user: Union[str, dict])

Perform the authentication process using grid credentials. Grid credentials can be loaded by calling gr.load_credentials().

Parameters:user – String containing the username of a loaded credential or a credential’s dict.
Raises:RuntimeError – If the authentication process fails.
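
As a usage sketch, wrapped in a hypothetical helper (the helper name and control flow are illustrative, not part of the API):

```python
def login(client, credential):
    # credential is either the username of a previously loaded grid
    # credential (a str) or a credential dict, as documented above.
    # authenticate() raises RuntimeError when the process fails.
    try:
        client.authenticate(credential)
        return True
    except RuntimeError:
        return False
```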
_forward_json_to_websocket_server_worker(self, message: dict)

Prepare/send a JSON message to a remote grid node and receive the response.

Parameters:message (dict) – message payload.
Returns:response payload.
Return type:node_response (dict)
_forward_to_websocket_server_worker(self, message: bin)

Prepare/send a binary message to a remote grid node and receive the response.

Parameters:message (bytes) – message payload.

Returns:response payload.
Return type:node_response (bytes)
serve_model(self, model, model_id: str = None, mpc: bool = False, allow_download: bool = False, allow_remote_inference: bool = False)

Host the model and optionally serve it using a Socket / REST API.

Parameters:
  • model – A jit model or Syft Plan.
  • model_id (str) – An integer or string representing the model id used to retrieve the model later on using the REST API. If this is not provided and the model is a Plan, model.id is used; if the model is a jit model, an exception is raised.
  • allow_download (bool) – If other workers should be able to fetch a copy of this model to run it locally set this to True.
  • allow_remote_inference (bool) – If other workers should be able to run inference using this model through a REST API interface, set this to True.
Returns:

True if the model was served successfully; raises a RuntimeError otherwise.

Return type:

result (bool)

Raises:
  • ValueError – If model_id is not provided and the model is a jit model (i.e., it has no id attribute).
  • RuntimeError – If there was a problem during model serving.
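
A minimal call sketch, wrapped in a hypothetical helper (the model id and flag values are examples, not defaults):

```python
def publish_model(client, model, model_id):
    # serve_model returns True on success and raises RuntimeError on
    # failure; here remote inference is enabled while download stays off.
    return client.serve_model(
        model,
        model_id=model_id,
        allow_download=False,
        allow_remote_inference=True,
    )
```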
run_remote_inference(self, model_id, data)

Run inference on a dataset using a remote model.

Parameters:
  • model_id (str) – Model ID.
  • data (Tensor) – Dataset to run inference on.
Returns:

Inference result

Return type:

inference (Tensor)

Raises:

RuntimeError – If unexpected behavior happens, the error message is forwarded.
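
Sketched usage, via a hypothetical wrapper (the helper name is illustrative):

```python
def classify_remote(client, model_id, batch):
    # Sends the batch to the node hosting ``model_id`` and returns the
    # inference result; run_remote_inference raises RuntimeError on error.
    return client.run_remote_inference(model_id=model_id, data=batch)
```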

_return_bool_result(self, result, return_key=None)
_send_http_request(self, route, data, request, N: int = 10, unhexlify: bool = True, return_response_text: bool = True)

Helper function for sending HTTP requests to the app.

Parameters:
  • route (str) – App route.
  • data (str) – Data to be sent in the request.
  • request (str) – Request type (GET, POST, PUT, …).
  • N (int) – Number of attempts in case of failure. Defaults to 10.
  • unhexlify (bool) – Whether to run unhexlify on the response.
  • return_response_text (bool) – If True, return response.text; otherwise return the raw response.
Returns:

If return_response_text is True, return response.text; otherwise return the raw response.

Return type:

response (str or bytes)
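
The retry behaviour controlled by N can be sketched as follows (an illustrative loop, not the actual implementation):

```python
import time

def send_with_retries(do_request, n_tries=10, delay=0.0):
    # Attempt the request up to n_tries times; return the first
    # successful response, re-raising the last error if all attempts fail.
    last_error = None
    for _ in range(n_tries):
        try:
            return do_request()
        except Exception as err:  # the real helper targets HTTP failures
            last_error = err
            time.sleep(delay)
    raise last_error
```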

_send_streaming_post(self, route: str, data: dict = None)

Used to send large models / datasets over a streaming channel.

Parameters:
  • route (str) – Service endpoint.
  • data (dict) – Dict with tensors / models to be uploaded.
Returns:

Response from the server.

Return type:

response (str)

_send_get(self, route, data=None, **kwargs)
delete_model(self, model_id: str)

Delete a model previously registered.

Parameters:model_id (str) – ID of the model that will be deleted.
Returns:True if the model was deleted successfully.
Return type:result (bool)
download_model(self, model_id: str)

Download a model to run it locally.

Parameters:model_id (str) – ID of the model that will be downloaded.
Returns:The downloaded model.
Return type:model
Raises:RuntimeError – If unexpected behavior happens, the error message is forwarded.
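
The two methods above compose naturally; a hypothetical helper that fetches a local copy and then removes the hosted version:

```python
def fetch_and_remove(client, model_id):
    # Download a local copy of the model, then delete the hosted copy
    # from the grid node (both calls documented above).
    model = client.download_model(model_id)
    client.delete_model(model_id)
    return model
```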
serve_encrypted_model(self, encrypted_model: sy.messaging.plan.Plan)

Serve a model in an encrypted fashion using SMPC.

A wrapper for sending the model. The worker is responsible for sharing the model using SMPC.

Parameters:encrypted_model (syft.Plan) – A Plan already shared with workers using SMPC.
Returns:True if the model was served successfully; raises a RuntimeError otherwise.
Return type:result (bool)
__str__(self)