
YumaPro gRPC Manual¶
YumaPro gRPC User Guide¶
This document describes the gRPC integration within the netconfd-pro server and ypgrpc-go-app application.

gRPC Introduction¶
The gRPC protocol uses protobuf (Protocol Buffers) for data transfer between the client and server.
The gRPC server integration with the netconfd-pro server is deployed with the help of the ypgrpc-go-app application, which transfers information between the integrated gRPC server, its gRPC clients, and the netconfd-pro server. This application provides a faster and easier platform to implement gRPC Services. It is similar to db-api-app, where users create instrumentation for their Services and RPCs.
gRPC Features¶
The main YumaPro gRPC functionality and integration features include:
Works with any developer-provided .proto files
The platform is the ypgrpc-go-app application, which is written in Go
The client gRPC code can be written in any language, and any tool can be used to send gRPC requests for the data: auto-generated code, GUI tools, CLI tools, etc.
Clients send gRPC requests directly to the ypgrpc-go-app
The subsystem reports to the netconfd-pro server when a gRPC stream starts and ends, for monitoring purposes
Stub code is generated using the 'protoc' tool
Stub code is integrated into ypgrpc-go-app, similar to db-api-app
gRPC examples are provided for all method kinds for faster and easier deployment, including:
An empty-request and empty-response RPC
An RPC with a single request and a single response
An RPC with a single request and a streaming response
An RPC with a sequence of requests and a single response
RPCs with a sequence of requests and responses, covering multiple different scenarios
The netconfd-pro server is a controller that provides additional functionality for the gRPC client-server communication. The netconfd-pro server has the following interactions with the gRPC server:
The ypgrpc-go-app application registers its capabilities and all the information about its gRPC Services, including:
List of available Services
List of available RPCs
List of open streams and when they were started
Counters to keep track of open and closed streams
List of supported .proto files
Name, address and port number of the gRPC server, and when it was started
The <grpc-shutdown> RPC operation can be used to shut down the gRPC server
ypgrpc-go-app¶
The ypgrpc-go-app application is a YControl subsystem (similar to db-api-app) that communicates with the netconfd-pro server, and it is also a gRPC server that communicates with gRPC clients. The main role of the ypgrpc-go-app application is to host the gRPC server, provide a common place to implement and instrument gRPC Services, and provide monitoring and control through the netconfd-pro server.
ypgrpc-go-app Features¶
The ypgrpc-go-app application provides the following features:
A common place to implement and instrument .proto Services and RPCs
A single gRPC server to handle all client requests
The subsystem reports its available capabilities to the netconfd-pro server
The subsystem reports to the netconfd-pro server when subscriptions start and end, for monitoring purposes
Remote monitoring and control of the gRPC server using the netconfd-pro server
The ability to remotely shut down the gRPC server using the netconfd-pro server
ypgrpc-go-app Processing¶

The diagram above illustrates the deployment of the gRPC server, the handling of all its Services and messages, and how the netconfd-pro server is integrated into this deployment.
The ypgrpc-go-app application is written in the Go language, talks to the netconfd-pro server via a socket, and acts as a YControl subsystem (similar to db-api-app).
gRPC clients can be written in any language that gRPC supports. The client part is out of the scope of this document, and the current gRPC protocol integration does not include a client part. The clients communicate with the gRPC server hosted by the ypgrpc-go-app application by sending gRPC requests to the application.
The core of the ypgrpc-go-app is the gRPC server together with the stub code integrated from the files auto-generated from the .proto files. Its main tasks are to:
Handle the integrated stub code callbacks, i.e. the callbacks integrated from stub code that was generated from the .proto files using the protoc tool
Register the gRPC server for protobuf message handling and gRPC Services callback invocation
Run the main Serve loop that handles all the client/server communication
The processing between a gRPC client and the netconfd-pro server can be split into the following components:
gRPC client to ypgrpc-go-app processing: includes message parsing and gRPC Services callback invocation
ypgrpc-go-app to netconfd-pro processing: includes YControl message exchange and stream information exchange when a new stream opens or closes
netconfd-pro internal processing: includes subsystem registration, subsystem message handling and parsing, and gRPC monitoring information handling (gRPC server and stream status)
The ypgrpc-go-app implements multiple goroutines to manage the replies and the clients. All of these managers are goroutines; they run in parallel and asynchronously. The following goroutines are implemented in the ypgrpc-go-app (see the sketch after this list):
Reply Manager goroutine: responsible for any already-parsed messages from the netconfd-pro server or a gRPC client; it stores any unprocessed messages that are ready to be processed
Message Manager goroutine: responsible for storing any ready-to-be-processed messages that are going to the netconfd-pro server and coming back from the server
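The sketch below shows, with hypothetical type, channel and function names, how two such cooperating manager goroutines could be wired together with Go channels. It is an illustration of the pattern only, not the actual ypgrpc-go-app implementation.
/* Minimal sketch of the manager goroutine pattern; all names here are
 * hypothetical. Each manager runs in its own goroutine and passes
 * messages along through a buffered channel.
 */
type message struct {
    payload []byte
}
/* replyManager queues already-parsed messages (from the netconfd-pro
 * server or a gRPC client) until they are ready to be processed */
func replyManager(parsed <-chan message, ready chan<- message) {
    for msg := range parsed {
        ready <- msg
    }
}
/* messageManager stores ready-to-be-processed messages going to the
 * netconfd-pro server or coming back from it */
func messageManager(ready <-chan message, toServer chan<- message) {
    for msg := range ready {
        toServer <- msg
    }
}
func startManagers() (chan<- message, <-chan message) {
    parsed := make(chan message, 16)
    ready := make(chan message, 16)
    toServer := make(chan message, 16)
    go replyManager(parsed, ready)     /* run in parallel, */
    go messageManager(ready, toServer) /* asynchronously */
    return parsed, toServer
}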
Startup Procedure¶
The ypgrpc-go-app application has the following startup steps for the gRPC server component:
Initialize all the prerequisites and parse all the CLI parameters
Open a TCP socket to listen for client requests
Serve any incoming gRPC messages from gRPC clients and, if needed, send an open-stream-event or close-stream-event to the netconfd-pro server with the help of the goroutine managers.
The ypgrpc-go-app acts as a YControl subsystem (similar to db-api-app); however, it does not terminate after one edit or get request. Instead, it continuously listens to the netconfd-pro server and keeps the AF_LOCAL or TCP socket open to continue communication whenever it is needed.
The communication is terminated only if the ypgrpc-go-app application terminates, the netconfd-pro server terminates, or the netconfd-pro server sends a request to terminate the ypgrpc-go-app application. All the message definitions are described in the yumaworks-yp-grpc.yang YANG module.
The ypgrpc-go-app application has the following startup steps to initialize connection with the netconfd-pro server:
Initialize all the prerequisites and parse all the CLI parameters
Based on the --proto CLI parameter, load all the .proto files and create a capability structure for the provided .proto files
Open a socket and send an <ncx-connect> request to the server with:
transport = netconf-aflocal
protocol = yp-grpc
Register the yp-grpc service:
Send a <register-request> to the server
Register the ypgrpc-go-app subsystem and initialize all corresponding code in the netconfd-pro server so it is ready to handle ypgrpc-go-app application requests
Send a <capability-ad-event> message to the netconfd-pro server to advertise all available Services, Methods and Streams
Keep listening on the socket until terminated
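For illustration, the sketch below shows what the connection step could look like at the socket level. The AF_LOCAL socket path and the exact <ncx-connect> encoding used here are assumptions; the real application performs this step inside its netconfd_connect and ycontrol packages.
import (
    "fmt"
    "net"
)
/* Hypothetical sketch of the startup connection; the socket path and
 * the message encoding are assumptions for illustration only. */
func connectToNetconfd() (net.Conn, error) {
    conn, err := net.Dial("unix", "/tmp/ncxserver.sock") /* assumed path */
    if err != nil {
        return nil, err
    }
    /* Send the <ncx-connect> request with transport=netconf-aflocal
     * and protocol=yp-grpc (the encoding shown is illustrative) */
    _, err = fmt.Fprintf(conn,
        "<ncx-connect transport=\"netconf-aflocal\" protocol=\"yp-grpc\"/>")
    if err != nil {
        conn.Close()
        return nil, err
    }
    return conn, nil
}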
ypgrpc-go-app Configuration Parameter List¶
The following configuration parameters are used by ypgrpc-go-app. Refer to the CLI Reference for more details.
ypgrpc-go-app CLI Parameters

Parameter | Description
---|---
--ca | Specifies the gRPC server CA certificate file
--cert | Specifies the gRPC server certificate file
--fileloc-fhs | Specifies whether the ypgrpc-go-app should use File system Hierarchy Standard (FHS) directory locations to create, store and use data and files
--insecure | Directs the application to skip TLS validation
--key | Specifies the gRPC server private key file
--log | Specifies the log file for the ypgrpc-go-app application
--log-console | Directs that log output will be sent to STDOUT, after being sent to the log file and/or local syslog daemon
--log-level | Controls the verbosity level of messages printed to the log file or STDOUT, if no log file is specified
--port | Specifies the port value to use for gRPC server connections
--proto | Specifies a .proto file for the ypgrpc-go-app application to use
--protopath | Specifies the file search path for .proto files
--server | Specifies the netconfd-pro server IP address
--subsys-id | Specifies the subsystem identifier (gRPC Server ID) to use when registering with the netconfd-pro server
ypgrpc-go-app Source Files¶
The following table lists the files that are included within the netconf/src/ypgrpc/src/ypgrpc directory.
Directory | Description
---|---
cli | Handles the CLI parameters for the ypgrpc-go-app application
credentials | Loads certificates and validates user credentials
examples | Stub code examples for .proto files (helloworld and example .protos)
log | Handles the logging for the ypgrpc-go-app application
message_handler | Auto-generated Go struct representation of the yumaworks-yp-grpc.yang file; used for message handling
netconfd_connect | Handles the netconfd-pro connection with ypgrpc-go-app
proto | .proto file handling, parsing, searching and storing
utils | Generic utility functions
ycontrol | Utilities to handle the netconfd-pro YControl messages and connections
The 'ypgrpc-go-app.go' file, found in the netconf/src/ypgrpc/src/ypgrpc directory, is the source code file that contains the main function. It provides the gRPC server functionality, connectivity to the netconfd-pro server, and the stub code gRPC Services callback handling.
ypgrpc-go-app Installation¶
The following sections describe the steps to install and test ypgrpc-go-app application.
ypgrpc-go-app Prerequisites¶
Install the Go programming language.
Version 'go1.15' or higher is required. To verify the installation and the version of the installed Go, run the following:
mydir> go version
go version go1.15 linux/amd64
Install the Protocol Buffer Compiler
To verify the installation and the version of the installed compiler, run the following:
mydir> protoc --version # Ensure compiler version is 3+
Install the protocol compiler plugins for Go using the following commands:
$ go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.26
$ go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.1
Update your PATH so that the 'protoc' compiler can find the plugins:
$ export PATH="$PATH:$(go env GOPATH)/bin"
The gRPC client applications are out of the scope of this document and the current gRPC protocol integration does not include client applications. Refer to the gRPC Tools page for more details.
To send requests to the gRPC server, the following tool can be used as an example:
grpc-client-cli - a generic gRPC command line client.
To install this tool, download the binary and install it into the /usr/local/bin directory:
> sudo curl -L https://github.com/vadimi/grpc-client-cli/releases/download/v1.10.0/grpc-client-cli_darwin_x86_64.tar.gz \
| sudo tar -C /usr/local/bin -xz
Alternatively, install it using 'go get':
> GO111MODULE=on go get -u github.com/vadimi/grpc-client-cli/cmd/grpc-client-cli@latest
To verify the installation and the version of the installed client tool, run the following:
> grpc-client-cli --version
grpc-client-cli version 1.10.0
You may need to add $GOPATH/bin to your PATH in order to run grpc-client-cli from your current directory, or run it with its full path as follows:
> $HOME/go/bin/grpc-client-cli --version
grpc-client-cli version 1.10.0
yp-grpc Binary Package Installation¶
If you do not have the source code and need to install YumaPro from a binary package, the application will be installed in the default /usr/bin location. If you would like to use a different directory, move the binary to your desired location.
The YumaPro gRPC functionality is installed with the yumapro-gnmi binary package. Refer to the yumapro-gnmi Binary Package Installation section for details on installing this package.
Binary Package ypgrpc-go-app Code Installation¶
The source code for the ypgrpc-go-app application will be installed in the /usr/share/yumapro/src/ypgrpc/src/ypgrpc directory for further modifications. However, there is no need to modify or run the application from that location if you want to test the application without any modifications.
The application has two .proto file implementations built in to illustrate its functionality. The .proto files are located at:
/usr/share/yumapro/src/ypgrpc/src/ypgrpc/examples/example/example.proto
/usr/share/yumapro/src/ypgrpc/src/ypgrpc/examples/helloworld/helloworld.proto
There are also other auto-generated files, such as the '.pb.go' and '_grpc.pb.go' files.
To use the application with modifications and new .proto file implementations, the ypgrpc-go-app application can be modified and rebuilt from the /usr/share/yumapro/src/ypgrpc/src/ypgrpc directory, or copied to the default $GOPATH location for modifications, for example:
mydir> cp -r /usr/share/yumapro/src/ypgrpc/src/ypgrpc $HOME/go/src/ypgrpc
After that, the ypgrpc-go-app application can be modified, updated and run from the $HOME/go/src/ypgrpc directory.
Compile and execute the ypgrpc-go-app code:
$HOME/go/src/ypgrpc> go run ypgrpc-go-app.go --log-level=debug --fileloc-fhs \
    --insecure --proto=helloworld --protopath=$HOME/protos
Setting up the gRPC Tools¶
Refer to the yumapro-gnmi Binary Package Installation section for additional details on setting up the Go tools needed for development.
The following steps are required:
Create the Go workspace directory
Set up the $GOBIN and $GOPATH variables
Install all dependencies
Create the workspace directory, $HOME/go, and set the $GOPATH environment variable:
mydir> mkdir -p ~/go
The $GOPATH can be any directory on your system; $HOME/go is the default $GOPATH on Unix-like systems since Go 1.8. Note that $GOPATH must not be the same path as your Go installation.
Edit ~/.bash_profile (or ~/.bashrc if that is present instead) to add the following lines:
export GOPATH=$HOME/go
export GOBIN=$GOPATH/bin
Save and exit your editor. Then source this file:
mydir> source ~/.bash_profile
Use 'go get' to install the following packages. Note that at this point $GOBIN and $GOPATH should already be set up:
Install Using make
This example assumes the ypwork source tree is in the home directory.
~/ypwork/netconf/src/ypgrpc> make goget
Install Using go get
The packages can be installed directly using these commands:
mydir> GO111MODULE=off go get github.com/aws/aws-sdk-go/aws
mydir> GO111MODULE=off go get google.golang.org/grpc
mydir> GO111MODULE=off go get google.golang.org/grpc/codes
mydir> GO111MODULE=off go get google.golang.org/grpc/status
mydir> GO111MODULE=off go get google.golang.org/protobuf/types/known/emptypb
mydir> GO111MODULE=off go get golang.org/x/text/encoding/unicode
mydir> GO111MODULE=off go get golang.org/x/text/transform
mydir> GO111MODULE=off go get github.com/jessevdk/go-flags
mydir> GO111MODULE=off go get github.com/openconfig/goyang/pkg/yang
mydir> GO111MODULE=off go get github.com/openconfig/ygot/ygot
mydir> GO111MODULE=off go get github.com/openconfig/ygot/ytypes
mydir> GO111MODULE=off go get github.com/jhump/protoreflect/desc
mydir> GO111MODULE=off go get github.com/jhump/protoreflect/desc/protoparse
mydir> GO111MODULE=off go get github.com/clbanning/mxj
mydir> GO111MODULE=off go get github.com/golang/protobuf/proto
mydir> GO111MODULE=off go get github.com/davecgh/go-spew/spew
mydir> GO111MODULE=off go get github.com/golang/protobuf/protoc-gen-go/descriptor
If you have installed YumaPro from source code, then you need to build and install using the WITH_GRPC=1 and WITH_YCONTROL=1 build variables. Build the netconfd-pro server with gRPC support:
make WITH_YCONTROL=1 WITH_GRPC=1
sudo make WITH_YCONTROL=1 WITH_GRPC=1 install
Refer to the Setting up a Custom GO Workspace section to use a custom workspace.
Source Code ypgrpc-go-app Installation¶
The source code for the ypgrpc-go-app application will be in the /netconf/src/ypgrpc directory. However, there is no need to modify or run the application from that location if you want to test the application without any modifications. The application has two built-in .proto files. Tools setup is described earlier, in the Binary Package ypgrpc-go-app Code Installation section.
Generate the client and server certificates if the gRPC client uses TLS validation. Refer to Generate the CA Certificates for more details.
Running ypgrpc-go-app¶
Run the server with the --with-grpc=true CLI parameter as follows:
mydir> sudo netconfd-pro --log-level=debug4 --with-grpc=true --fileloc-fhs=true
Start the ypgrpc-go-app application. Note that you have to provide the certificates to start the application:
mydir> ypgrpc-go-app --cert=~/certs/server.crt --ca=~/certs/ca.crt \
--key=~/certs/server.key --fileloc-fhs --protopath=$HOME/protos \
--proto=helloworld --proto=example
It can also be run in “insecure” mode for test or verification:
mydir> ypgrpc-go-app --log-level=debug --fileloc-fhs --insecure \
--protopath=$HOME/protos --proto=helloworld --proto=example
After this step the gRPC server starts listening for gRPC client requests. It will handle all requests for the provided example.proto and helloworld.proto files, and will advertise its gRPC capabilities and the example.proto and helloworld.proto Services to the netconfd-pro server.
Closing ypgrpc-go-app¶
The ypgrpc-go-app can be shut down by typing Ctrl-C in the window that started the application.
If the netconfd-pro server is not running when ypgrpc-go-app is started, the application will terminate with an error message stating that the netconfd-pro server is not running.
If the netconfd-pro server is shut down, then ypgrpc-go-app will also shut down.
The netconfd-pro server has a <grpc-shutdown> NETCONF operation that can be invoked to shut down the ypgrpc-go-app application.
Proto Search Path¶
The ypgrpc-go-app uses configurable search paths to find .proto files that are needed during operation.
If the --protopath parameter is specified, that search path is tried, relative to the current working directory. If the file is not found there, the search terminates in failure. Sub-directories are searched.
--protopath=../../protos
If the --proto parameter is specified without the --protopath parameter, then the $HOME/protos directory is checked by default. Sub-directories are searched (see the sketch below).
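The following sketch (with an assumed function name) illustrates this search order: an explicit --protopath is searched first, including sub-directories, and $HOME/protos is used as the default when only --proto is given.
import (
    "fmt"
    "os"
    "path/filepath"
)
/* findProto walks the search path looking for <name>.proto, including
 * sub-directories; the function name is hypothetical. */
func findProto(protopath, name string) (string, error) {
    root := protopath
    if root == "" {
        /* no --protopath given: check $HOME/protos by default */
        root = filepath.Join(os.Getenv("HOME"), "protos")
    }
    found := ""
    walkErr := filepath.Walk(root,
        func(p string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            if !info.IsDir() && info.Name() == name+".proto" {
                found = p
            }
            return nil
        })
    if walkErr != nil {
        return "", walkErr
    }
    if found == "" {
        /* not found: the search terminates in failure */
        return "", fmt.Errorf("%s.proto not found under %s", name, root)
    }
    return found, nil
}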
ypgrpc-go-app Quick Start Guide¶
As in many RPC systems, gRPC is based around the idea of defining a service, specifying the methods that can be called remotely with their parameters and return types. On the server side, the server implements this interface and runs a gRPC server to handle client calls.
The ypgrpc-go-app subsystem provides a unified place where all these interfaces can be implemented, and runs a gRPC server to handle client calls.
The first step when working with protocol buffers is to define the structure for the data to serialize in a .proto file: this is an ordinary text file with a .proto extension.
Protocol buffer data is structured as messages, where each message is a small logical record of information containing a series of name-value pairs called fields.
The next step is to define gRPC services in ordinary .proto files, with RPC method parameters and return types specified as protocol buffer messages. Refer to $HOME/go/src/ypgrpc/examples/helloworld/helloworld.proto for an example.
/* The greeting service definition */
service Greeter {
/* Sends a greeting */
rpc SayHello (HelloRequest) returns (HelloReply) {}
/* Sends another greeting */
rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
}
/* The request message containing the user's name */
message HelloRequest {
string name = 1;
}
/* The response message containing the greetings */
message HelloReply {
string message = 1;
}
ypgrpc-go-app Application Helloworld Example¶
Run the server with the --with-grpc=true CLI parameter as follows:
mydir> sudo netconfd-pro --log-level=debug4 --with-grpc=true --fileloc-fhs=true
After installation, the example code of ypgrpc-go-app should be copied to $HOME/go/src/ypgrpc and the example Service implementations copied to $HOME/go/src/ypgrpc/examples.
To run the example application:
Change to the example directory:
> cd $HOME/go/src/ypgrpc
Compile and execute the ypgrpc-go-app code:
ypgrpc> go run ypgrpc-go-app.go --log-level=debug --fileloc-fhs --insecure --proto=helloworld --protopath=$HOME/protos
Run the client application:
The gRPC client applications are out of the scope of this document. In this example the grpc-client-cli tool is used:
> grpc-client-cli --proto $HOME/protos/helloworld.proto localhost:50830
? Choose a service: helloworld.Greeter
? Choose a method: SayHello
Message json (type ? to see defaults): {"name":"An example name"}
{
"message": "Hello An example name"
}
After the request is sent, the gRPC server runs the corresponding callback and replies to the client with a message as defined in the .proto files. The ypgrpc-go-app application log may look as follows:
ypgrpc_server: Starting to serve
HelloRequest:{
"name": "An example name"
}
Update ypgrpc-go-app Services¶
The gRPC service is defined using the following .proto file.
/* The greeting service definition */
service Greeter {
/* Sends a greeting */
rpc SayHello (HelloRequest) returns (HelloReply) {}
/* Sends another greeting */
rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
}
/* The request message containing the user's name */
message HelloRequest {
string name = 1;
}
/* The response message containing the greetings */
message HelloReply {
string message = 1;
}
The .proto file will generate both client and server stub code with a SayHello() RPC method that takes a HelloRequest parameter from the client and returns a HelloReply from the server.
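For reference, the server-side interface that the protoc-gen-go-grpc plugin generates for this service looks approximately like this (the mustEmbed method is the reason the server types shown later embed UnimplementedGreeterServer):
/* Approximate generated server API for the Greeter service */
type GreeterServer interface {
    /* Sends a greeting */
    SayHello(context.Context, *HelloRequest) (*HelloReply, error)
    /* Sends another greeting */
    SayHelloAgain(context.Context, *HelloRequest) (*HelloReply, error)
    mustEmbedUnimplementedGreeterServer()
}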
To add a service definition, open $HOME/go/src/ypgrpc/examples/helloworld/helloworld.proto and add a new SayHelloOneMore() method, with the same request and response types:
/* The greeting service definition */
service Greeter {
/* Sends a greeting */
rpc SayHello (HelloRequest) returns (HelloReply) {}
/* Sends another greeting */
rpc SayHelloAgain (HelloRequest) returns (HelloReply) {}
/* Sends another greeting */
rpc SayHelloOneMore (HelloRequest) returns (HelloReply) {}
}
/* The request message containing the user's name */
message HelloRequest {
string name = 1;
}
/* The response message containing the greetings */
message HelloReply {
string message = 1;
}
Regenerate gRPC Code¶
Before you can use the new service method, you need to recompile the updated .proto file.
In the examples directory, run the following command:
> cd $HOME/go/src/ypgrpc/examples
> protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. \
--go-grpc_opt=paths=source_relative helloworld/helloworld.proto
This will regenerate the helloworld/helloworld.pb.go and helloworld/helloworld_grpc.pb.go files, which contain:
Code for populating, serializing, and retrieving HelloRequest and HelloReply message types
Generated server stub code to integrate into ypgrpc-go-app application
The server code will be used by the ypgrpc-go-app application, which registers the Service and its Methods and is where the instrumentation is done.
Update ypgrpc-go-app¶
Once the server code has been regenerated, it can be implemented, called, and integrated into the ypgrpc-go-app application.
Open $HOME/go/src/ypgrpc/ypgrpc-go-app.go and add the following function to it:
/* SayHelloOneMore implements helloworld.GreeterServer */
func (s *helloworldServer) SayHelloOneMore (ctx context.Context,
in *helloworld.HelloRequest) (
*helloworld.HelloReply,
error) {
log.Log_info("\n\nHelloRequest:")
log.Log_dump_structure(in)
return &helloworld.HelloReply{
Message: "Say Hello OneMore" + in.GetName(),
}, nil
}
Run Updated ypgrpc-go-app¶
Run the netconfd-pro server and the ypgrpc-go-app application:
Run the netconfd-pro server:
mydir> sudo netconfd-pro --log-level=debug4 --with-grpc=true --fileloc-fhs=true
Change to the example directory:
> cd $HOME/go/src/ypgrpc
Run the updated ypgrpc-go-app application:
ypgrpc> go run ypgrpc-go-app.go --log-level=debug --fileloc-fhs --insecure \
    --proto=helloworld --protopath=$HOME/protos
Run the client application:
The gRPC client applications are out of the scope of this document. In this example the grpc-client-cli tool is used:
> grpc-client-cli --proto $HOME/protos/helloworld.proto localhost:50830
? Choose a service: helloworld.Greeter
? Choose a method: SayHelloOneMore
Message json (type ? to see defaults): {"name":" An example name"}
{
"message": "Say Hello OneMore An example name"
}
After the request is sent, the gRPC server runs the corresponding callback and replies to the client with a message as defined in the .proto files. The ypgrpc-go-app application log may look as follows:
ypgrpc_server: Starting to serve
HelloRequest:{
"name": " An example name"
}
ypgrpc-go-app and gRPC Services¶
This tutorial provides a basic Go introduction to working with ypgrpc-go-app and gRPC callbacks.
The first step is to define the gRPC Service and the method request and response types using protocol buffers. For the complete .proto file, see $HOME/go/src/ypgrpc/examples/example/example.proto.
Five kinds of service methods are used in the ExampleService service:
An empty request and empty response RPC
/* Empty request And Empty response RPC */
rpc EmptyCall(google.protobuf.Empty) returns (google.protobuf.Empty);
A simple RPC where the client sends a request to the server using the stub and waits for a response to come back, just like a normal function call.
/* RPC that represent single request and response
 * The server returns the client payload as-is. */
rpc UnaryCall(SimpleRequest) returns (SimpleResponse);
A server-side streaming RPC where the client sends a request to the server and gets a stream to read a sequence of messages back. The client reads from the returned stream until there are no more messages. To specify a server-side streaming method, place the stream keyword before the response type.
/* RPC that represent single request and a streaming response
 * The server returns the payload with client desired type and sizes. */
rpc StreamingOutputCall(StreamingOutputCallRequest)
    returns (stream StreamingOutputCallResponse);
A client-side streaming RPC where the client writes a sequence of messages and sends them to the server, again using a provided stream. Once the client has finished writing the messages, it waits for the server to read them all and return its response. To specify a client-side streaming method, place the stream keyword before the request type.
/* RPC that represent a sequence of requests and a single response
 * The server returns the aggregated size of client payload as the result. */
rpc StreamingInputCall(stream StreamingInputCallRequest)
    returns (StreamingInputCallResponse);
A bidirectional streaming RPC where both sides send a sequence of messages using a read-write stream. The two streams operate independently, so clients and servers can read and write in whatever order they like: for example, the server could wait to receive all the client messages before writing its responses, or it could alternately read a message then write a message, or some other combination of reads and writes. The order of messages in each stream is preserved. To specify this type of method, place the stream keyword before both the request and the response.
/* RPC that represent a sequence of requests and responses
 * with each request served by the server immediately.
 * As one request could lead to multiple responses, this interface
 * demonstrates the idea of full duplexing. */
rpc FullDuplexCall(stream StreamingOutputCallRequest)
    returns (stream StreamingOutputCallResponse);

/* RPC that represent a sequence of requests and responses.
 * The server buffers all the client requests and then serves them in order.
 * A stream of responses are returned to the client when the server starts with
 * first request. */
rpc HalfDuplexCall(stream StreamingOutputCallRequest)
    returns (stream StreamingOutputCallResponse);
The .proto file also contains protocol buffer message type definitions for all the request and response types used in the service methods. For example, here is the SimpleRequest message type:
/* Unary request */
message SimpleRequest {
EchoStatus response_status = 1;
User user = 2;
}
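For this message, the protoc-gen-go plugin generates a Go struct with nil-safe accessor methods, approximately as sketched below (internal bookkeeping fields are omitted). The nil-safe getters are why the handlers shown later can safely chain calls such as in.GetUser().GetName().
/* Approximate generated type for SimpleRequest (sketch only) */
type SimpleRequest struct {
    ResponseStatus *EchoStatus
    User           *User
}
/* Generated accessors are nil-safe */
func (x *SimpleRequest) GetResponseStatus() *EchoStatus {
    if x != nil {
        return x.ResponseStatus
    }
    return nil
}
func (x *SimpleRequest) GetUser() *User {
    if x != nil {
        return x.User
    }
    return nil
}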
ypgrpc-go-app Application¶
The ypgrpc-go-app application provides an interface to add new gRPC Services and implement new Methods.
This application shows an example of how the gRPC interface can be used and how the Services and methods can be implemented.
Example ypgrpc-go-app Application:
/** helloworldServer is used to implement helloworld.GreeterServer */
type helloworldServer struct {
helloworld.UnimplementedGreeterServer
}
/** exampleServer is used to implement example.ExampleServiceServer */
type exampleServer struct {
example.UnimplementedExampleServiceServer
}
/** @} */
/********************************************************************
* *
* F U N C T I O N S *
* *
*********************************************************************/
/* SayHello implements helloworld.GreeterServer */
func (s *helloworldServer) SayHello (ctx context.Context,
in *helloworld.HelloRequest) (
*helloworld.HelloReply,
error) {
log.Log_info("\n\nHelloRequest:")
log.Log_dump_structure(in)
return &helloworld.HelloReply{
Message: "Hello " + in.GetName(),
}, nil
}
/* SayHelloAgain implements helloworld.GreeterServer */
func (s *helloworldServer) SayHelloAgain (ctx context.Context,
in *helloworld.HelloRequest) (
*helloworld.HelloReply,
error) {
log.Log_info("\n\nHelloRequest:")
log.Log_dump_structure(in)
return &helloworld.HelloReply{
Message: "Say Hello Again" + in.GetName(),
}, nil
}
/* SayHelloOneMore implements helloworld.GreeterServer */
func (s *helloworldServer) SayHelloOneMore (ctx context.Context,
in *helloworld.HelloRequest) (
*helloworld.HelloReply,
error) {
log.Log_info("\n\nHelloRequest:")
log.Log_dump_structure(in)
return &helloworld.HelloReply{
Message: "Say Hello OneMore" + in.GetName(),
}, nil
}
/* Empty request And Empty response RPC */
func (exampleServer) EmptyCall (ctx context.Context,
in *emptypb.Empty) (
*emptypb.Empty,
error) {
log.Log_info("\n\nEmptyRequest:")
log.Log_dump_structure(in)
/* Empty input and output.
* Run your Instrumentation here
*/
return in, nil
}
/* RPC that represent single request and response
* The server returns the client payload as-is.
*/
func (exampleServer) UnaryCall (ctx context.Context,
in *example.SimpleRequest) (
*example.SimpleResponse,
error) {
log.Log_info("\n\nSimpleRequest:")
log.Log_dump_structure(in)
/* The Incoming Code must be 0 to signal that request is well formed */
if in.ResponseStatus != nil && in.ResponseStatus.Code != int32(codes.OK) {
log.Log_info("\nReceived Error Code: %v",
in.GetResponseStatus().GetCode())
return nil, status.Error(codes.Code(in.ResponseStatus.Code), "error")
}
return &example.SimpleResponse{
User: &example.User{
Id: in.GetUser().GetId(),
Name: in.GetUser().GetName(),
},
}, nil
}
/* RPC that represent single request and a streaming response
* The server returns the payload with client desired type and sizes.
*/
func (exampleServer) StreamingOutputCall (in *example.StreamingOutputCallRequest,
stream example.ExampleService_StreamingOutputCallServer) error {
/* Example Request:
{ "response_parameters":[
{"size":10,"interval_us":5},
{"size":10,"interval_us":6}
],
"user": {"id":13,"name":"Example-1"},
"response_status":{"code":0,"message":"Example message 1"}
}
*/
log.Log_info("\n\nStreamingOutputCallRequest:")
log.Log_dump_structure(in)
rsp := &example.StreamingOutputCallResponse{
User: &example.User{},
}
/* Starting/ Closing a Server Stream. Update monitoring information. */
netconfd_connect.Open_streams("example",
"ExampleService",
"StreamingOutputCall")
defer netconfd_connect.Close_streams("example",
"ExampleService",
"StreamingOutputCall")
count := int32(1)
for _, param := range in.ResponseParameters {
log.Log_info("\nInterval between responses: %v\n",
param.GetIntervalUs())
/* Wait as specified in the interval parameter */
time.Sleep(time.Duration(param.GetIntervalUs()) * time.Second)
if stream.Context().Err() != nil {
/* Closing a Server Stream. Update monitoring information. */
return stream.Context().Err()
}
buf := ""
for i := 0; i < int(param.GetSize()); i++ {
buf += in.GetUser().GetName()
}
count++
rsp.User.Id = count
rsp.User.Name = buf
if err := stream.Send(rsp); err != nil {
/* Closing a Server Stream. Update monitoring information. */
return err
}
}
log.Log_info("\nDone Streaming")
return nil
}
/* RPC that represent a sequence of requests and a single response
* The server returns the aggregated size of client payload as the result.
*/
func (exampleServer) StreamingInputCall (stream example.ExampleService_StreamingInputCallServer) error {
log.Log_info("\n\nExampleService_StreamingInputCallServer:")
log.Log_dump_structure(stream)
/* Starting/ Closing a Client Stream. Update monitoring information. */
netconfd_connect.Open_streams("example",
"ExampleService",
"StreamingInputCall")
defer netconfd_connect.Close_streams("example",
"ExampleService",
"StreamingInputCall")
size := 0
for {
req, err := stream.Recv()
if err == io.EOF {
return stream.SendAndClose(&example.StreamingInputCallResponse{
AggregatedPayloadSize: int32(size),
})
}
if err != nil {
return err
}
size += len(req.User.Name)
}
}
/* RPC that represent a sequence of requests and responses
* with each request served by the server immediately.
* As one request could lead to multiple responses, this interface
* demonstrates the idea of full duplexing.
*/
func (exampleServer) FullDuplexCall (stream example.ExampleService_FullDuplexCallServer) error {
log.Log_info("\n\nExampleService_FullDuplexCallServer:")
log.Log_dump_structure(stream)
/* Starting/ Closing a Client/ Server Streams. Update monitoring information. */
netconfd_connect.Open_streams("example",
"ExampleService",
"FullDuplexCall")
defer netconfd_connect.Close_streams("example",
"ExampleService",
"FullDuplexCall")
for {
req, err := stream.Recv()
if err == io.EOF {
return nil
}
if err != nil {
return status.Error(codes.Internal, err.Error())
}
if req.ResponseStatus != nil && req.ResponseStatus.Code != int32(codes.OK) {
return status.Error(codes.Code(req.ResponseStatus.Code), "error")
}
resp := &example.StreamingOutputCallResponse{User: &example.User{}}
for _, param := range req.ResponseParameters {
if stream.Context().Err() != nil {
return stream.Context().Err()
}
buf := ""
for i := 0; i < int(param.GetSize()); i++ {
buf += req.GetUser().GetName()
}
resp.User.Name = buf
if err := stream.Send(resp); err != nil {
return err
}
}
}
}
/* RPC that represent a sequence of requests and responses.
* The server buffers all the client requests and then serves them in order.
* A stream of responses are returned to the client when the server starts with
* first request.
*/
func (exampleServer) HalfDuplexCall (stream example.ExampleService_HalfDuplexCallServer) error {
log.Log_info("\n\nExampleService_HalfDuplexCallServer:")
log.Log_debug_dump(stream)
/* Starting/ Closing a Client/ Server Streams. Update monitoring information. */
netconfd_connect.Open_streams("example",
"ExampleService",
"HalfDuplexCall")
defer netconfd_connect.Close_streams("example",
"ExampleService",
"HalfDuplexCall")
requests := []*example.StreamingOutputCallRequest{}
for {
req, err := stream.Recv()
if err == io.EOF {
break
}
if err != nil {
return status.Error(codes.Internal, err.Error())
}
requests = append(requests, req)
}
for _, req := range requests {
resp := &example.StreamingOutputCallResponse{User: &example.User{}}
for _, param := range req.ResponseParameters {
if stream.Context().Err() != nil {
return stream.Context().Err()
}
buf := ""
for i := 0; i < int(param.GetSize()); i++ {
buf += req.GetUser().GetName()
}
resp.User.Name = buf
if err := stream.Send(resp); err != nil {
return err
}
}
}
return nil
}
/**
* @brief MAIN IO server loop for the gRPC manager
*
*/
func main () {
var res error = nil
/* Connection to netconfd-pro server:
* 1) Parse CLI parameters:
* - Host
* - subsys-id
* - user
* - proto files
* - etc
* 2) Open Socket and send NCX-Connect request
* 3) Start to listen on the socket (select loop)
* 4) Register YControl service (gRPC service)
* 5) Send <capability-ad-event> event
* 6) Start to listen for any request <-> response
*
*/
/* Parse all the CLI parameters */
res = cli.ParseCLIParameters()
if res != nil {
utils.Check_error(res)
}
/* set logging parameters */
log.SetLevel()
log.SetLogOut()
/* Print all the CLI parameters provided */
log.Log_dump_structure(cli.GetCliOptions())
log.Log_info("\n\nStarting ypgrpc-go-app...")
log.Log_info("\nCopyright (c) 2021, YumaWorks, Inc., " +
"All Rights Reserved.\n")
/* Connect to the server */
conn, res := netconfd_connect.Connect_netconfd()
if res != nil {
utils.Check_error(res)
}
/* Defer is used to ensure that a function call is performed
* later in a program’s execution, usually for purposes of cleanup.
* defer is often used where e.g. ensure and finally would be used
* in other languages.
*/
defer conn.Close()
/* Get the actual TCP host address to use for gRPC server,
* Address CLI parameter + port number,
* By default this will be:
* 127.0.0.1:50830
*/
addr := netconfd_connect.GetServerAddr()
log.Log_info("\nStarting the gRPC server ...")
log.Log_info("\nListening for client request on '%s'",
addr)
/* All initialization is done
* Start the gRPC server to listen for clients
*/
lis, err := net.Listen("tcp", addr)
if err != nil {
log.Log_error("\nError: failed to Listen (%v)", err)
}
/* Start the gRPC server and Register all the gRPC Services */
grpcServer := grpc.NewServer()
/* Register All the Services here */
helloworld.RegisterGreeterServer(grpcServer, &helloworldServer{})
example.RegisterExampleServiceServer(grpcServer, &exampleServer{})
log.Log_info("\nypgrpc_server: Starting to serve")
if err := grpcServer.Serve(lis); err != nil {
log.Log_error("\nError: failed to Serve (%v)", err)
}
result, res := ioutil.ReadAll(conn)
utils.Check_error(res)
log.Log_info("\n%s", string(result))
log.CloseLogOut()
os.Exit(0)
} /* main */
ypgrpc-go-app Interface Functions¶
The ypgrpc-go-app application is a YControl subsystem (similar to db-api-app) that has multiple APIs to communicate with the netconfd-pro server.
Setup
netconfd_connect.Connect_netconfd: Initializes the YP-gRPC service with the YControl subsystem and advertises the gRPC server capabilities to the netconfd-pro server
Update monitoring information
netconfd_connect.Open_streams: Sends a subsystem event to advertise that a new gRPC server or client stream was opened
netconfd_connect.Close_streams: Sends a subsystem event to advertise that a gRPC server or client stream was closed
gRPC Service Implementation¶
The ypgrpc-go-app example gRPC server has an exampleServer structure type that implements the generated ExampleService interface:
type exampleServer struct {
}
func (exampleServer) EmptyCall (ctx context.Context,
in *emptypb.Empty) (
*emptypb.Empty,
error) {
}
func (exampleServer) UnaryCall (ctx context.Context,
in *example.SimpleRequest) (
*example.SimpleResponse,
error) {
}
func (exampleServer) StreamingOutputCall (in *example.StreamingOutputCallRequest,
stream example.ExampleService_StreamingOutputCallServer) error {
}
func (exampleServer) StreamingInputCall (stream example.ExampleService_StreamingInputCallServer) error {
}
func (exampleServer) FullDuplexCall (stream example.ExampleService_FullDuplexCallServer) error {
}
func (exampleServer) HalfDuplexCall (stream example.ExampleService_HalfDuplexCallServer) error {
}
Empty RPC¶
The exampleServer implements all the service methods.
/* Empty request And Empty response RPC */
func (exampleServer) EmptyCall (ctx context.Context,
in *emptypb.Empty) (
*emptypb.Empty,
error) {
log.Log_info("\n\nEmptyRequest:")
log.Log_dump_structure(in)
/* Empty input and output.
* Run your Instrumentation here
*/
return in, nil
}
Simple RPC¶
A simple RPC, UnaryCall, just gets a SimpleRequest from the client and returns the corresponding SimpleResponse from the server.
The method is passed a context object for the RPC and the client's SimpleRequest protocol buffer request. It returns a SimpleResponse protocol buffer object with the response information and an error.
In this method, the SimpleResponse is set with the appropriate information and then returned with a nil error.
/* RPC that represent single request and response
* The server returns the client payload as-is.
*/
func (exampleServer) UnaryCall (ctx context.Context,
in *example.SimpleRequest) (
*example.SimpleResponse,
error) {
log.Log_info("\n\nSimpleRequest:")
log.Log_dump_structure(in)
/* The Incoming Code must be 0 to signal that request is well formed */
if in.ResponseStatus != nil && in.ResponseStatus.Code != int32(codes.OK) {
log.Log_info("\nReceived Error Code: %v",
in.GetResponseStatus().GetCode())
return nil, status.Error(codes.Code(in.ResponseStatus.Code), "error")
}
return &example.SimpleResponse{
User: &example.User{
Id: in.GetUser().GetId(),
Name: in.GetUser().GetName(),
},
}, nil
}
Server-side Streaming RPC¶
The StreamingOutputCall is a server-side streaming RPC, which sends multiple StreamingOutputCallResponse objects to the client.
/* RPC that represent single request and a streaming response
* The server returns the payload with client desired type and sizes.
*/
func (exampleServer) StreamingOutputCall (in *example.StreamingOutputCallRequest,
stream example.ExampleService_StreamingOutputCallServer) error {
/* Example Request:
{ "response_parameters":[
{"size":10,"interval_us":5},
{"size":10,"interval_us":6}
],
"user": {"id":13,"name":"Example-1"},
"response_status":{"code":0,"message":"Example message 1"}
}
*/
log.Log_info("\n\nStreamingOutputCallRequest:")
log.Log_dump_structure(in)
rsp := &example.StreamingOutputCallResponse{
User: &example.User{},
}
/* Starting/ Closing a Server Stream. Update monitoring information. */
netconfd_connect.Open_streams("example",
"ExampleService",
"StreamingOutputCall")
defer netconfd_connect.Close_streams("example",
"ExampleService",
"StreamingOutputCall")
count := int32(1)
for _, param := range in.ResponseParameters {
log.Log_info("\nInterval between responses: %v\n",
param.GetIntervalUs())
/* Wait as specified in the interval parameter */
time.Sleep(time.Duration(param.GetIntervalUs()) * time.Second)
if stream.Context().Err() != nil {
/* Closing a Server Stream. Update monitoring information. */
return stream.Context().Err()
}
buf := ""
for i := 0; i < int(param.GetSize()); i++ {
buf += in.GetUser().GetName()
}
count++
rsp.User.Id = count
rsp.User.Name = buf
if err := stream.Send(rsp); err != nil {
/* Closing a Server Stream. Update monitoring information. */
return err
}
}
log.Log_info("\nDone Streaming")
return nil
}
Instead of getting simple request and response objects in the method parameters, this time the method gets a request object and a special ExampleService_StreamingOutputCallServer stream object that is used to write the responses.
The server returns the payload with the client-requested sizes.
In the method, as many StreamingOutputCallResponse objects as needed are populated and written to the ExampleService_StreamingOutputCallServer stream using its Send() method.
Finally, a nil error is returned to indicate that the function has finished writing responses. Should any error happen in this call, return a non-nil error; the gRPC layer will translate it into an appropriate RPC status to be sent to the client.
Client-side Streaming RPC¶
The client-side streaming method StreamingInputCall() is used to get a stream of StreamingInputCallRequest messages from the client and return a single StreamingInputCallResponse with aggregated information.
This method does not have a request parameter at all. Instead, it uses an ExampleService_StreamingInputCallServer stream to both read and write messages. It uses the Recv() method to receive client messages, and returns a single response using the SendAndClose() method.
In the method body, the ExampleService_StreamingInputCallServer Recv() method is used to repeatedly read the client requests into a request object (in this case a StreamingInputCallRequest) until there are no more messages.
The server needs to check the error returned from Recv() after each call. If nil is returned, the stream is still active and it is OK to continue reading. If io.EOF is returned, the message stream has ended and the server can return its response. If it has any other value, the error is returned as-is so that it will be translated into an RPC status by the gRPC layer.
/* RPC that represent a sequence of requests and a single response
* The server returns the aggregated size of client payload as the result.
*/
func (exampleServer) StreamingInputCall (stream example.ExampleService_StreamingInputCallServer) error {
log.Log_info("\n\nExampleService_StreamingInputCallServer:")
log.Log_dump_structure(stream)
/* Starting/ Closing a Client Stream. Update monitoring information. */
netconfd_connect.Open_streams("example",
"ExampleService",
"StreamingInputCall")
defer netconfd_connect.Close_streams("example",
"ExampleService",
"StreamingInputCall")
size := 0
for {
req, err := stream.Recv()
if err == io.EOF {
return stream.SendAndClose(&example.StreamingInputCallResponse{
AggregatedPayloadSize: int32(size),
})
}
if err != nil {
return err
}
size += len(req.User.Name)
}
}
Bidirectional Streaming RPC¶
The bidirectional streaming RPC FullDuplexCall() allows one request to produce multiple responses, using full duplexing.
/* RPC that represent a sequence of requests and responses
* with each request served by the server immediately.
* As one request could lead to multiple responses, this interface
* demonstrates the idea of full duplexing.
*/
func (exampleServer) FullDuplexCall (stream example.ExampleService_FullDuplexCallServer) error {
log.Log_info("\n\nExampleService_FullDuplexCallServer:")
log.Log_dump_structure(stream)
/* Starting/ Closing a Client/ Server Streams. Update monitoring information. */
netconfd_connect.Open_streams("example",
"ExampleService",
"FullDuplexCall")
defer netconfd_connect.Close_streams("example",
"ExampleService",
"FullDuplexCall")
for {
req, err := stream.Recv()
if err == io.EOF {
return nil
}
if err != nil {
return status.Error(codes.Internal, err.Error())
}
if req.ResponseStatus != nil && req.ResponseStatus.Code != int32(codes.OK) {
return status.Error(codes.Code(req.ResponseStatus.Code), "error")
}
resp := &example.StreamingOutputCallResponse{User: &example.User{}}
for _, param := range req.ResponseParameters {
if stream.Context().Err() != nil {
return stream.Context().Err()
}
buf := ""
for i := 0; i < int(param.GetSize()); i++ {
buf += req.GetUser().GetName()
}
resp.User.Name = buf
if err := stream.Send(resp); err != nil {
return err
}
}
}
}
The ExampleService_FullDuplexCallServer stream is used to read and write messages. Responses can be written while the client is still writing messages to its own message stream.
The syntax for reading and writing here is very similar to the client-streaming method, except the server uses the Send() method rather than the SendAndClose() method. Each side will always get the other’s messages in the order they were written, and both the client and server can read and write completely independently.
The following example is also a bidirectional streaming RPC and uses HalfDuplexCall(). However, this time the server buffers all the client requests and then serves them in order: the stream of responses is returned to the client, starting with the response for the first request, only after the server has received all the requests. This interface demonstrates the idea of half duplexing.
/* RPC that represent a sequence of requests and responses.
* The server buffers all the client requests and then serves them in order.
* A stream of responses are returned to the client when the server starts with
* first request.
*/
func (exampleServer) HalfDuplexCall (stream example.ExampleService_HalfDuplexCallServer) error {
log.Log_info("\n\nExampleService_HalfDuplexCallServer:")
log.Log_debug_dump(stream)
/* Starting/ Closing a Client/ Server Streams. Update monitoring information. */
netconfd_connect.Open_streams("example",
"ExampleService",
"HalfDuplexCall")
defer netconfd_connect.Close_streams("example",
"ExampleService",
"HalfDuplexCall")
requests := []*example.StreamingOutputCallRequest{}
for {
req, err := stream.Recv()
if err == io.EOF {
break
}
if err != nil {
return status.Error(codes.Internal, err.Error())
}
requests = append(requests, req)
}
for _, req := range requests {
resp := &example.StreamingOutputCallResponse{User: &example.User{}}
for _, param := range req.ResponseParameters {
if stream.Context().Err() != nil {
return stream.Context().Err()
}
buf := ""
for i := 0; i < int(param.GetSize()); i++ {
buf += req.GetUser().GetName()
}
resp.User.Name = buf
if err := stream.Send(resp); err != nil {
return err
}
}
}
return nil
}
Starting gRPC Server¶
Once all the methods are implemented, a gRPC server needs to be started.
The following snippet shows how the gRPC services are started in the ypgrpc-go-app application:
/* All initialization is done
* Start the gRPC server to listen for clients
*/
lis, err := net.Listen("tcp", addr)
if err != nil {
log.Log_error("\nError: failed to Listen (%v)", err)
}
/* Start the gRPC server and Register all the gRPC Services */
grpcServer := grpc.NewServer()
/* Register All the Services here */
helloworld.RegisterGreeterServer(grpcServer, &helloworldServer{})
example.RegisterExampleServiceServer(grpcServer, &exampleServer{})
log.Log_info("\nypgrpc_server: Starting to serve")
if err := grpcServer.Serve(lis); err != nil {
log.Log_error("\nError: failed to Serve (%v)", err)
}
The gRPC server implementation consists of the following steps:
Specify the port to listen on for client requests with the --port CLI parameter (the default is 50830)
Create an instance of the gRPC server using grpc.NewServer(...)
Register the service implementations with the gRPC server
Call Serve() on the server with the listener to block and wait for client requests
gRPC State Monitoring¶
The yumaworks-grpc-mon.yang module can be used to retrieve gRPC monitoring data from the netconfd-pro server.
gRPC Monitoring Example¶
Run the server with the --with-grpc=true flag as follows:
mydir> sudo netconfd-pro --log-level=debug4 --with-grpc=true --fileloc-fhs=true
Start the ypgrpc-go-app application in “insecure” mode for this scenario:
mydir> ypgrpc-go-app --log-level=debug --fileloc-fhs --insecure \
--protopath=$HOME/protos --proto=helloworld --proto=example
The gRPC server capabilities can be checked by sending a <get> request to the netconfd-pro server, for example with the following RESTCONF GET request:
> curl http://restconf-dev/restconf/data/grpc-state \
-H "Accept:application/yang-data+json"
The server may respond as follows:
{
"yumaworks-grpc-mon:grpc-state": {
"statistics": {
"active-server-streams": 0,
"active-client-streams": 0,
"total-active-streams": 0,
"total-closed-streams": 0
},
"server": [
{
"name": "example-grpc",
"address": "192.168.0.216",
"port": 50830,
"start-time": "2021-10-21T00:27:00Z",
"proto": [
"example"
],
"active-server-streams": 0,
"active-client-streams": 0,
"closed-streams": 0,
"services": {
"service": [
{
"name": "example.ExampleService",
"method": [
{
"name": "EmptyCall",
"client-streaming": false,
"server-streaming": false
},
{
"name": "FullDuplexCall",
"client-streaming": true,
"server-streaming": true
},
{
"name": "HalfDuplexCall",
"client-streaming": true,
"server-streaming": true
},
{
"name": "StreamingInputCall",
"client-streaming": true,
"server-streaming": false
},
{
"name": "StreamingOutputCall",
"client-streaming": false,
"server-streaming": true
},
{
"name": "UnaryCall",
"client-streaming": false,
"server-streaming": false
}
]
}
]
}
}
]
}
}
<grpc-shutdown> Operation¶
The <grpc-shutdown> operation is used to shut down the ypgrpc-go-app application.
By default, only the 'superuser' account is allowed to invoke this operation.
If permission is granted, the netconfd-pro server will send a request to shut down the ypgrpc-go-app application and its gRPC server. All gRPC streams will be dropped during the ypgrpc-go-app application shutdown.
<grpc-shutdown> operation

Min parameters: 0
Max parameters: 0
Return type: none
YANG file: yumaworks-grpc-mon.yang
Capabilities needed: none
Mandatory Parameters:
none
Optional Parameters:
none
Returns:
none; ypgrpc-go-app will be shut down upon success
Possible Operation Errors:
access denied
Example Request:
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="2"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<grpc-shutdown xmlns="http://yumaworks.com/ns/yumaworks-grpc-mon"/>
</rpc>
Example Reply:
[no reply will be sent; ypgrpc-go-app will be shut down instead]
Netconfd-pro and ypgrpc-go-app Interaction¶
The ypgrpc-go-app application uses several messages, described in this section, to interact with the netconfd-pro server.
Message Format¶
These messages are defined in the yumaworks-yp-grpc.yang YANG module. The ypgrpc-go-app payload is defined as a YANG container that augments the YControl “message-payload” container.
The following diagram gives an overview of the YControl messages:

Messages Interaction¶
The ypgrpc-go-app application contains a YControl subsystem service that communicates with the netconfd-pro server. The netconfd-pro server and the ypgrpc-go-app application have the following message interactions:
capability-ad-event: ypgrpc-go-app sends this subsystem event to advertise all the available and active gRPC capabilities at registration time
open-stream-event: ypgrpc-go-app sends this subsystem event to advertise a new gRPC server or client stream
close-stream-event: ypgrpc-go-app sends this subsystem event to advertise that a gRPC server or client stream was closed
Registration Message Flow¶
During the startup phase the server will initialize the yp-grpc subsystem callback functions and handlers (similar to the db-api service layer).
The connection with the server is started with an <ncx-connect> message that adds the YControl subsystem with the "example-grpc" subsystem ID to the server (agt_connect module).
YControl protocol connection parameters:
transport: netconf-aflocal
protocol: yp-grpc
<port-num> not used
Additional parameters:
subsys-id: example-grpc
The Registration message flow looks as follows:
ypgrpc-go-app to Server: register-request: the yp-grpc service registers the callbacks supported by the subsystem
Server to ypgrpc-go-app: ok: the server responds to the register request with an <ok> or an <error> message
ypgrpc-go-app to Server: capability-ad-event: sends a subsystem event to advertise all the available and active gRPC capabilities at registration time
The YP-gRPC subsystem service messages are defined in the yumaworks-yp-grpc.yang module.