# go-msx

go-msx is a Go library for microservices and tools interacting with MSX.
## Support

Support for go-msx and related projects is available on the #go-msx channel of the MSX Slack workspace.
## Versioning

This library and its tools are currently in a pre-alpha state and are subject to backwards-incompatible changes at any time. After the first stable release (v1.0.0), SemVer will be used, per industry and Go best practices.
## Requirements

- Go 1.18+

- Ensure your `GOPATH` is correctly set and referenced in your `PATH`. For example:

  ```bash
  export GOPATH=~/go
  export PATH=$PATH:$GOPATH/bin
  ```

- Be sure to set your Go proxy settings correctly. For example:

  ```bash
  go env -w GOPRIVATE=cto-github.cisco.com/NFV-BU
  ```

- Git SSH configuration for cto-github.cisco.com:

  - Ensure you have a registered SSH key referenced in your `~/.ssh/config`:

    ```
    Host cto-github.cisco.com
        HostName cto-github.cisco.com
        User git
        IdentityFile ~/.ssh/github.key
    ```

    Note that this key must be registered via the GitHub UI.

  - Ensure you have an SSH protocol override for git HTTPS URLs to our GitHub in your `~/.gitconfig`:

    ```
    [url "ssh://git@cto-github.cisco.com/"]
        insteadOf = https://cto-github.cisco.com/
    ```

- Skel tool for code generation:

  - Check out go-msx into your local workspace:

    ```bash
    mkdir -p $HOME/msx && cd $HOME/msx
    git clone git@cto-github.cisco.com:NFV-BU/go-msx.git
    cd go-msx
    go mod download
    ```

  - Install skel:

    ```bash
    make install-skel
    ```
## Quick Start

- To continue working on an existing go-msx project, return to the original project README instructions and continue.

- To add go-msx to an existing module-enabled Go project:

  ```bash
  go get -u cto-github.cisco.com/NFV-BU/go-msx
  ```

- To create a new go-msx microservice skeleton project:

  ```bash
  cd $HOME/msx
  skel
  ```
## Documentation

Please visit our internal site or public site.
## License
Copyright © 2019-2022, Cisco Systems Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
## Modules Overview

go-msx is composed of a number of layers and modules.
### Application

- app: Application lifecycle
- background: Application background errors
### Platform

- restops: HTTP REST endpoints
- streamops: Stream channels
- scheduled: Scheduled tasks
- audit: Updating model audit fields and logging auditable events
- exec: Subprocess execution
- httpclient: HTTP client
- rbac: Role-based access control
- security: Attribute-based access control
- retry: Reliability
- sanitize: Input/output sanitization
- transit: Transit encryption
- validate: Data validation
- migrate: Database migration
  - sqldb/migrate: SQL database migration
- populate: API population
### Integration

- discovery: Register and locate microservices
  - consulprovider: Consul discovery provider
- stream: Communicate using streams
- webservice: REST web server
  - adminprovider: Admin actuator
  - aliveprovider: Liveness actuator
  - apilistprovider: API list documentation
  - asyncapiprovider: AsyncApi documentation
  - authprovider: Authentication
  - debugprovider: Debug profiling
  - envprovider: Configuration actuator
  - healthprovider: Health actuator
  - idempotency: Idempotency-Key filter
  - infoprovider: Info actuator
  - loggersprovider: Logging actuator
  - maintenanceprovider: Maintenance actuator
  - metricsprovider: Metrics actuator
  - prometheusprovider: Prometheus stats
  - swaggerprovider: Swagger documentation
- cli: Command line interaction
- health: Health checks
  - consulcheck: Consul health check
  - kafkacheck: Kafka health check
  - redischeck: Redis health check
  - sqldbcheck: SQL health check
  - vaultcheck: Vault health check
- integration: REST API client
- cache: Caching
  - lru: In-memory cache provider
  - redis/cache: Redis cache provider
- operations: Operations support
- schema: Schema documentation
  - asyncapi: AsyncApi schema documentation
  - js: JSON schema documentation
  - openapi: OpenApi schema documentation
  - swagger: Swagger schema documentation
- leader: Leader election
  - consulprovider: Consul leader provider
- certificate: Certificate management
### Infrastructure

- consul: Consul driver
- vault: Vault driver
- redis: Redis driver
- sqldb: SQL database driver
- kafka: Kafka driver
- trace/datadog: Datadog tracing
- trace/jaeger: Jaeger tracing
### Core

- config: Configuration
- log: Logging
- trace: Tracing
- stats: Statistics
- fs: Filesystems
- resources: Resources
- types: Reusable data types
### Continuous Integration

- build: Build execution
## MSX Logging Module

MSX logging extends the popular logrus logging library to include:
- Log names
- Level-specific loggers
- Improved context handling
### Usage

After importing the MSX log package, you can use the default named logger (msx) simply:

```go
import "cto-github.cisco.com/NFV-BU/go-msx/log"

var logger = log.StandardLogger()

func main() {
    var action = "started"
    logger.Infof("Something happened: %s", action)
}
```
To use a logger with a custom name:

```go
var logger = log.NewLogger("alert.api")
```

To create a logger named after the current module:

```go
var logger = log.NewPackageLogger()
```

To create a levelled logger, which outputs print statements at the defined log level:

```go
debugLogger := logger.Level(log.DebugLevel)
debugLogger.Printf("Some template: %s", "inserted")
```

To record a golang error object:

```go
func DeployResource(data []byte) {
    var body ResourceDeployment
    if err := json.Unmarshal(data, &body); err != nil {
        logger.
            WithError(err).
            Error("Failed to parse Resource Deployment request")
    }
}
```

To use the log context that was embedded in a Context object:

```go
func HandleRequest(ctx context.Context) {
    requestLogger := logger.WithContext(ctx)
    ...
}
```

To add one-time custom diagnostic fields:

```go
var logger = log.NewLogger("tenant")

func HandleGetTenantRequest(tenantId string) {
    logger.
        WithExtendedField("tenantId", tenantId).
        Debug("Tenant retrieval requested")
}
```

To create a sub-logger with custom diagnostic fields:

```go
var logger = log.NewLogger("services.tenant")

func HandleGetTenantRequest(tenantId string) {
    requestLogger := logger.WithExtendedField("tenantId", tenantId)
    requestLogger.Debugf("some message")
}
```
### Configuration

#### Logging Levels

MSX Logging defines the following log levels:

- Trace
- Debug
- Info
- Warn
- Error
- Panic
- Fatal

A logging level filter can be set globally:

```go
log.SetLevel(log.WarnLevelName)
```

This ensures that all loggers not configured at a stricter level only output messages with a level of WARN or above.

An individual logger (and its sub-loggers) can be set to a minimum level:

```go
logger = log.NewLogger("msx.beats")
logger.SetLevel(log.LevelByName(log.InfoLevelName))
```

Configuration (e.g. command line options) can be used to set a logger's minimum level:

```bash
myapp --logger.msx.beats=debug
```

This sets the minimum level of the msx.beats logger tree to DEBUG after the application configuration has been loaded.
#### Output Format

Output can be switched to JSON formatting:

```go
log.SetFormat(log.LogFormatJson)
```

And back to LogFmt formatting:

```go
log.SetFormat(log.LogFormatLogFmt)
```

By default, all output is sent to standard output with high-resolution timestamps. See init.go for specifics.
Errors
Go has a built-in error interface to be implemented by error
models.
go-msx has chosen to use the github.com/pkg/errors
module to implement errors. This custom error module enables collecting
stack traces, critical for logging and debugging of errors.
When instantiating or wrapping an error,
use this package instead of the standard library errors package.
import (
"context"
"github.com/pkg/errors"
)
// Create a globally visible error
var MyStaticError = errors.New("Static error occurred")
var MyOtherError = errors.New("Other error occurred")
// Return the global error
func mine(ctx context.Context) error {
return MyStaticError
}
// Wrap the error into your own domain
func yours(ctx context.Context) error {
return errors.Wrap(mine(ctx), "Something bad happened")
}
func callYours(ctx context.Context) error {
err := yours(ctx)
if errors.Is(err, MyStaticError) {
// Special handling for this error type
} else {
// General hanlding for any other error types
return err
}
return nil
}
The above example shows how to create a global error, and how to re-contextualize (wrap) inside the parent.
### Composition

Composite errors implement the CompositeError interface:

```go
type CompositeError interface {
    Errors() interface{}
}
```

go-msx provides two composite error models: ErrorMap and ErrorList. Each of these represents a set of errors.

- ErrorMap: Represents a set of key-error pairs, intended to map to sub-parts of a structured parent component.

  ```go
  return types.ErrorMap{
      "element1": validation.Validate(&element1, validation.Required),
      "element2": validation.Validate(&element2, validation.MinLength(1)),
  }
  ```

- ErrorList: Represents a series of error instances (or nils), intended to map to elements in a parent sequence.

  ```go
  return types.ErrorList{
      validation.Validate(&parent[0], validation.Required),
      validation.Validate(&parent[1], validation.MinLength(1)),
  }
  ```
The above error models also implement Filterable:

```go
type Filterable interface {
    Filter() error
}
```

This allows the composite error to collect non-error (nil) values, which will be removed from the return value of Filter(). This feature is used by the validate package during DTO validation.
### Log Customization

To enable attaching custom log fields from your error, the logging subsystem checks if your error implements the LogFielder interface:

```go
type LogFielder interface {
    LogFields() map[string]any
}
```

Any fields returned by the LogFields() function will be added as log fields if the error is output to the log via WithError().
## MSX Configuration Module

MSX configuration is a Spring-compatible dynamic configuration library. It includes support for:

- remote configuration stores
- dynamic configuration updates
- JSON, JSON5, YAML, INI and Properties files
- key normalization
- structure population

### Model

MSX Configuration has three main components: providers, settings, and the config object.

- Providers load settings for your application. This could be from a file, environment variables, or some other source of configuration.
- Settings represent the configuration options for your application. Settings are represented as key/value pairs.
- Config holds all of the providers and loaded settings. This object allows you to load, watch, retrieve, apply and convert your settings.

Each provider combines its contents into a single keyspace; these keyspaces are then superposed (like a "1000-layer lasagna") to produce a final combined keyspace and key/value mapping.
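The superposition can be sketched in plain Go; this is a simplification of the real merge logic, with invented provider contents:

```go
package main

import "fmt"

// superpose flattens provider keyspaces in order: later providers
// override earlier ones, like layers in the "lasagna" above.
func superpose(layers ...map[string]string) map[string]string {
	combined := map[string]string{}
	for _, layer := range layers {
		for k, v := range layer {
			combined[k] = v
		}
	}
	return combined
}

func main() {
	defaults := map[string]string{"server.port": "8080", "log.level": "info"}
	env := map[string]string{"log.level": "debug"}

	// The environment layer is superposed over the defaults layer,
	// so its log.level wins while server.port survives from defaults.
	merged := superpose(defaults, env)
	fmt.Println(merged["log.level"], merged["server.port"])
}
```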
### Quick Start

#### Instantiation

When using MSX Configuration inside the MSX Application context, you can retrieve the configuration object from the ctx context.Context:

```go
cfg := config.MustFromContext(ctx)
```

When using MSX Configuration outside of the MSX Application context, you can instantiate your own providers. For example, to consume the environment variables from the current process:

```go
environmentProvider := config.NewEnvironment("env")
cfg := config.NewConfig(environmentProvider)
```
#### Value Retrieval

Using one of the above cfg objects, you can retrieve the user's home directory from the HOME environment variable. Note that all config keys are normalized to be lowercase, hyphen-free, and period-separated. This means the HOME environment variable will be mapped to home:

```go
homePath, err := cfg.String("home")
```

The cfg object presents a number of functions to return a strongly-typed value:

- String(key string)
- Int(key string)
- Float(key string)
- Bool(key string)

These functions look up the specified key in the configuration and, if found, attempt to convert the value to the specified type. If the key is not found or configuration has not yet been loaded, an appropriate error is returned.

If you wish to use an alternative (default) value in case the lookup fails, you can use the Or variants:

- StringOr(key string, other string)
- IntOr(key string, other int)
- FloatOr(key string, other float)
- BoolOr(key string, other bool)

The specified other value will be returned if the config has been loaded but the lookup fails:

```go
buildPath, err := cfg.StringOr("build.path", "./build")
```
#### Structure Population

You can also populate appropriately defined structures:

```go
type ConnectionConfig struct {
    Name        string
    Skipped     bool `config:"-"`
    AnotherName int  `config:"somethingelse"`
}

var connectionConfig ConnectionConfig
err := cfg.Populate(&connectionConfig, "some.connection")
```

Each structure field is treated a little differently based on the contents/existence of the config struct tag:

- Name: populated from some.connection.name (default behaviour)
- Skipped: not populated, due to the config:"-" tag (omit when the source name is a hyphen)
- AnotherName: populated from some.connection.somethingelse (overridden field name)
### Spring Compatibility

One of the primary goals for MSX Configuration is close compatibility with Spring-style configuration. Several known incompatibilities and limitations currently exist:

- Key Normalization: Configuration keys in MSX Configuration are simply normalized to be lowercase, hyphen-free, and period-separated. As of Spring 2.0, configuration keys are expected to be snake-case and period-separated. MSX Configuration cannot distinguish between the app.some-data and app.somedata keys, and normalizes them both to app.somedata.
- Arbitrary Population: MSX Configuration currently supports @ConfigurationProperties-style structure population. As a consequence, all data used to populate a structure must be direct descendants of the key used to populate the structure. We intend to support arbitrary key specification for structures in the future.
### Built-In Providers

MSX Configuration has many built-in providers, allowing the application to unify configuration from a wide variety of sources:

- INIFile - Loads settings from a .ini file
- JSONFile - Loads settings from a .json or .json5 file
- YAMLFile - Loads settings from a .yaml or .yml file
- TOMLFile - Loads settings from a .toml file
- PropertiesFile - Loads settings from a .properties file
- CobraProvider - Loads settings from a Cobra command context
- PFlagProvider - Loads settings from a PFlag flagset
- GoFlagProvider - Loads settings from a go flag flagset
- ConsulProvider - Loads settings from Consul
- VaultProvider - Loads settings from Vault
- Environment - Loads settings from environment variables
- Static - Loads settings from an in-memory map
### Helpers

Along with the above providers, there are some wrappers for managing config lifecycle:

- CachedLoader - Caches settings in memory until flushed by Invalidate()
- OnceLoader - Caches settings in memory permanently
- Resolver - Remaps settings from one key to another
### Consul Configuration Provider

The Consul config provider reads settings from the KV version 1 Consul plugin. It currently supports two separate read paths: default and service-specific. These read paths are expected to exist directly under the KV mount point.

By default, the provider will load KV settings from the following locations:

- userviceconfiguration/defaultapplication - default settings
- userviceconfiguration/${info.app.name} - service-specific settings
#### Provider Configuration
| Key | Default | Required | Description |
|---|---|---|---|
| spring.cloud.consul.config.enabled | false | Optional | Enable loading configuration from consul KV |
| spring.cloud.consul.config.disconnected | false | Optional | Activate "disconnected" mode for CLI commands |
| spring.cloud.consul.config.prefix | userviceconfiguration | Optional | Consul KV mount point |
| spring.cloud.consul.config.default-context | defaultapplication | Optional | KV folder path under mount point containing global settings |
| spring.cloud.consul.config.pool | false | Optional | Pool consul connections |
| spring.cloud.consul.config.delay | 3s | Optional | Retry delay after KV setting retrieval failure |
| spring.cloud.consul.config.required | - ${spring.cloud.consul.config.prefix}/${spring.cloud.consul.config.default-context} | Optional | KV settings paths that must return KV values |
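As a sketch, the provider could be enabled in a YAML settings file (filename and layout assumed, matching the keys in the table above):

```yaml
spring:
  cloud:
    consul:
      config:
        enabled: true
        prefix: userviceconfiguration
        default-context: defaultapplication
```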
## MSX Application Module

MSX Application is a simple state machine for managing application lifecycle. It installs observers to configure and instantiate the standard components for use with MSX applications. This includes listeners for external events to advance the state machine (e.g. POSIX signals, configuration changes).

### Lifecycle Events

MSX Application defines various lifecycle events:

- app.EventCommand - mode selection based on CLI sub-commands
- app.EventInit - pre-configure application
- app.EventConfigure - application and component configuration
- app.EventStart - start services for consumers
- app.EventReady - application fully initialized and ready to service requests
- app.EventRefresh - update configuration after change
- app.EventStop - stop services for consumers
- app.EventFinalize - pre-termination cleanup

Each event (except app.EventCommand) proceeds through three phases:

- app.PhaseBefore - early
- app.PhaseDuring - normal
- app.PhaseAfter - late

The app.EventCommand event will be executed with the phase containing the command being executed. The following commands are pre-defined:

- app.CommandRoot - Root (default)
- app.CommandMigrate - Migrate
- app.CommandPopulate - Populate
### Event Observers

When a lifecycle event phase occurs, the MSX Application calls each of the Observers registered for that event phase. These callbacks can be registered during any previous lifecycle event callback or during your module's init().

For example, to call the addWebService observer during start.before for all commands:

```go
func init() {
    app.OnEvent(app.EventStart, app.PhaseBefore, addWebService)
}
```

To see an example of command-specific event observers, see Command, below.
### Short-Circuiting

Sometimes an application is not able to correctly execute a lifecycle phase, or receives an external interruption. This results in a short-circuit of the lifecycle. If an error is returned from one of the observers in the following events, the lifecycle moves to the specified event:

- app.EventInit => app.EventFinalize
- app.EventConfigure => app.EventFinalize
- app.EventStart => app.EventStop
- app.EventReady => app.EventStop
### Application Observers

#### Command

The app.EventCommand events are the first events fired during startup. They provide the opportunity to execute custom logic and register event observers specific to the command.

As above, the app.EventCommand event will be executed with the phase containing the command being executed. For example, the phase could be one of the default commands:

- Root (app.CommandRoot)
- Migrate (app.CommandMigrate)
- Populate (app.CommandPopulate)

To add a new command:

```go
func main() {
    if _, err := app.AddCommand("token", "Create OAuth2 token", renew, app.Noop); err != nil {
        cli.Fatal(err)
    }
}
```

To configure event observers in response to a specific command being executed:

```go
func init() {
    app.OnEvent(app.EventCommand, app.CommandRoot, func(ctx context.Context) error {
        app.OnEvent(app.EventStart, app.PhaseBefore, addWebService)
        return nil
    })
}
```
#### Init
The app.EventInit events are fired second, after the app.EventCommand events.
Observers attached to the app.EventInit events should be restricted to modifying the application environment. This includes registering custom config providers or custom context injectors.
#### Configure

The app.EventConfigure events are fired third during startup, after the app.EventInit events.

By default, the application is configured as follows:

- app.PhaseBefore:
  - Register remote config providers
- app.PhaseDuring
- app.PhaseAfter:
  - HTTP Client
  - Consul connection pool
  - Vault connection pool
  - Cassandra connection pool
  - Redis connection pool
  - Kafka connection pool
  - Web server
  - Create Cassandra Keyspace

Typically, user applications will not register new event handlers for the app.EventConfigure events.
#### Start

The app.EventStart events are fired fourth during startup, after the app.EventConfigure events.

By default, application infrastructure is connected:

- app.PhaseBefore:
  - Authentication Providers
  - Spring Actuators
  - Swagger
  - Prometheus Actuator
  - Stats Pusher
- app.PhaseAfter:
  - Health logging
  - Stream Router
  - Web Server
  - Config Watcher

Custom application startup code is expected to run inside the app.PhaseDuring phase. This includes starting any long-running services or scheduling background tasks.
#### Ready

The app.EventReady events are fired fifth during startup, after the app.EventStart events.

By default, application ready observers are executed:

- app.PhaseBefore:
  - Service Registration (consul)
- app.PhaseAfter:
  - Command Execution (sub-commands)
#### Refresh

TBD
#### Stop

The app.EventStop events are fired first during shutdown.

By default, application services are stopped and infrastructure is disconnected:

- app.PhaseBefore:
  - Service De-Registration (consul)
  - Health logging
  - Stream router
  - Web Server
  - Stats Pusher

Any custom application code running in the background should be shut down during app.PhaseDuring.
#### Finalize

The app.EventFinalize events are fired last during shutdown.

By default, tracing is stopped during app.PhaseAfter to allow trace collection to include app.EventStop.
### Configuration Loading

In response to the app.EventConfigure event, MSX Application combines all registered sources of configuration. This occurs in three phases:

- Phase 1 - In-Memory
  - Application Static Defaults
  - Environment Variables
  - Application Runtime Overrides
  - Command Line
- Phase 2 - Filesystem
  - Defaults Files
  - Bootstrap Files
  - Application Files
  - Profile Files
  - Build Files
- Phase 3 - Remote
  - Consul
  - Vault

Note that this loading order is not the same as the order of precedence for calculating values (lowest to highest):

- Application Static Defaults
- Defaults Files
- Bootstrap Files
- Application Files
- Build Files
- Consul
- Vault
- Profile Files
- Environment Variables
- Command Line
- Application Runtime Overrides
## MSX Dependencies
In large applications, inter-object dependency management becomes more challenging. Within the go standard library, the Context object is provided to share dependencies and cancellation. This simplifies writing unit tests, since dependencies can be injected via the context.
MSX Application provides a Context object to event Observers so they may inject new dependencies for their subsystems. The context object also carries Trace information for logging and trace publishing.
By default, the following dependencies are added to the MSX Application context:
- Configuration
- Cockroach client pool
- Consul client pool
- Vault client pool
- Redis client pool
- Kafka client pool
- Http client factory
During migrate execution, the Migration Manifest is also available from the context.
### Accessing Dependencies

Each substitutable component in go-msx requires a context accessor to allow injecting and inspecting overrides:

```go
type contextKeyNamed string

func ContextDomainService() types.ContextAccessor[DomainService] {
    return types.NewContextAccessor[DomainService](contextKeyNamed("DomainService"))
}
```

Key to type safety is the external invisibility of the context key. This is guaranteed by defining a module-local type (contextKey or contextKeyNamed) and using an instance of it to index the context inspection/injection.

To inject your custom dependency into the current context:

```go
ctx = domain.ContextDomainService().Set(ctx, domainService)
```

To retrieve a dependency from the current context:

```go
domainServiceApi := domain.ContextDomainService().Get(ctx)
```

or:

```go
domainServiceApi, ok := domain.ContextDomainService().TryGet(ctx)
```
### Logging and Tracing

To apply logging and tracing fields from the current context:

```go
myLogger.WithContext(ctx).Info("My log message")
```
## MSX Statistics

MSX Statistics allows applications to monitor and record application metrics for display on dashboards and for generating application alarms. We have chosen to support Prometheus and its OpenMetrics format to expose collected data.

### Statistics Types

MSX Statistics supports several base data collection types:

- Counter

  A counter is an ever-increasing number. For example, "Completed Requests" will continuously increase throughout the lifetime of the application. It is initialized to zero on application startup.

- Gauge

  A gauge is a metric that represents a single numerical value that can arbitrarily go up and down. For example, "Active Requests" increases when a new request arrives, and decreases when a request is fully serviced.

- Histogram

  A histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. It also provides a sum of all observed values. For example, "Query Duration" has a range of time values (from 0 seconds and up). These can be put into buckets to see what the 99th percentile Query Duration is (using the Prometheus histogram_quantile function in the dashboard).

If you wish to further group the data, you can use the Vector version of each of the above types. For example, we can group a "Request Duration Histogram" by API endpoint, in order to see the distribution of request durations for each endpoint separately from the others.
### Usage

#### Instantiation

To start collecting a statistic, you must first initialize its collector. This can be accomplished during module initialization by assigning the collector to a module-global variable:

```go
const (
    statsSubsystemConsul               = "consul"
    statsHistogramConsulCallTime       = "call_time"
    statsGaugeConsulCalls              = "calls"
    statsCounterConsulCallErrors       = "call_errors"
    statsGaugeConsulRegisteredServices = "registrations"
)

var (
    // Collect the number of errors for each api
    countVecConsulCallErrors = stats.NewCounterVec(
        statsSubsystemConsul,
        statsCounterConsulCallErrors,
        "api", "param")

    // Collect the number of active requests for each api
    gaugeVecConsulCalls = stats.NewGaugeVec(
        statsSubsystemConsul,
        statsGaugeConsulCalls,
        "api", "param")

    // Collect the distribution of call execution times for each api
    histVecConsulCallTime = stats.NewHistogramVec(
        statsSubsystemConsul,
        statsHistogramConsulCallTime,
        nil,
        "api", "param")
)
```
As you can see above, each of the collector constructors starts with two required arguments:

- Subsystem

  Identifies the application subsystem being monitored. In this case, consul.

- Metric Name

  Identifies the individual metric dimension. By convention, duration histograms end with _time, and counters are pluralized.
The histogram (and histogram vector) constructors require an argument specifying the buckets and their upper limits. To use the default buckets, pass nil for this argument. The current default buckets are calculated by executing prometheus.ExponentialBuckets(10, 2, 16): this evaluates to [10, 20, 40, ..., 327680]. For more information about histograms, you can visit the Prometheus documentation.
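To make the bucket computation concrete, here is a local re-implementation of the exponential-bucket formula (the real function lives in the Prometheus client library):

```go
package main

import "fmt"

// exponentialBuckets mirrors the semantics of
// prometheus.ExponentialBuckets(start, factor, count): it returns
// `count` upper bounds, each `factor` times the previous one.
func exponentialBuckets(start, factor float64, count int) []float64 {
	buckets := make([]float64, count)
	for i := range buckets {
		buckets[i] = start
		start *= factor
	}
	return buckets
}

func main() {
	buckets := exponentialBuckets(10, 2, 16)
	fmt.Println(buckets[0], buckets[1], buckets[len(buckets)-1])
}
```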
Vector constructors, as shown above, accept a final series of dimensions to be applied to each of the measurements. In the example above, each of our vectors accepts the api and param groupings. In the consul stats collector:

- api identifies which Consul API endpoint is being called (by path)
- param identifies e.g. the service name for discovery
#### Collection

After initializing your collectors, you can start to measure your application as the relevant events occur.

A common pattern is to define a wrapper function whose only purpose is to collect statistics. In the Consul package, we can see an example of this:

```go
func observeConsulCall(api, param string, fn func() error) (err error) {
    // Collect the start time of the call
    start := time.Now()

    // Increase the number of active calls
    gaugeVecConsulCalls.WithLabelValues(api, param).Inc()

    // Execute this code before returning, even in case of panic()
    defer func() {
        // Reduce the number of active calls
        gaugeVecConsulCalls.WithLabelValues(api, param).Dec()

        // Bucket the call duration in the histogram
        histVecConsulCallTime.WithLabelValues(api, param).Observe(
            float64(time.Since(start)) / float64(time.Millisecond))

        if err != nil {
            // Increase the error count if an error was returned from fn
            countVecConsulCallErrors.WithLabelValues(api, param).Inc()
        }
    }()

    // Call the wrapped function and intercept its error return value
    err = fn()

    // Return the wrapped function's value, after the defer block runs
    return err
}
```
There are a few things to note here that are not covered in the inline comments:

- We directly pass api and param group values to each of the vectors from the wrapper using .WithLabelValues(). These must be passed in the same order as in the constructor.
- Time periods should be calculated as float64 milliseconds.
- Counters and Gauges can be incremented by 1.0 using the .Inc() method.
- Gauges can be decremented by 1.0 using the .Dec() method.
- Histograms can record an observation using the .Observe() method.
### Push Gateway

By default, the MSX Statistics package expects the statistics to be polled by an external application. If such a poller is not available, MSX Statistics can be configured to push to an external Prometheus push gateway.

#### Configuration

The following configuration settings can be specified to configure the stats pusher:

| Key | Description | Default |
|---|---|---|
| stats.push.enabled | enable the stats pusher | false |
| stats.push.url | url to push stats to | |
| stats.push.job-name | prometheus job name to send | go_msx |
| stats.push.frequency | duration between pushes | 15s |
## Distributed Tracing

MSX Distributed Tracing allows the collection of an operational flow graph. Based on OpenTracing, tracing helps pinpoint where failures occur and what causes poor performance.

### Model

- Span

  A span is a named, timed operation representing a piece of the operational flow. Spans can have parents and children.

- Trace

  A trace is the complete tree of spans from an entire operational flow. A new trace (with a new root span) is created by input from an external system, such as a REST API client. Traces extend across synchronous and asynchronous message flows (internal RPC and events).
### Usage

The most common usage of tracing is to create a new child span within the current span, and execute an operation inside it. To facilitate this, you can use the trace.Operation() function:

```go
err := trace.Operation(ctx, "myChildOperation", func(ctx context.Context) error {
    myLogger.WithContext(ctx).Info("Inside myChildOperation...")
    return nil
})
```

To create a new child span and attach data to it, you can use the trace.NewSpan() function:

```go
// Create the new span
ctx, span := trace.NewSpan(ctx, spanName)
defer span.Finish()

// Tag the operation name
span.SetTag(trace.FieldOperation, operationName)

// Execute the operation and record the result
if err := myOperation(); err != nil {
    span.LogFields(trace.Status("ERROR"), trace.Error(err))
} else {
    span.LogFields(trace.Status("OK"))
}
```
Common trace log tags include:

- trace.FieldOperation: Generic operation name
- trace.FieldStatus: Terminal status of the operation
- trace.FieldHttpCode: Response status code
- trace.FieldHttpUrl: Request url
- trace.FieldHttpMethod: Request method

Other tags can be defined as needed using simple period-separated strings (e.g. grpc.response.code).
### Advanced Usage

When writing a new driver for external input (such as a new RPC transport listener), you can retrieve the untraced context:

```go
ctx = trace.UntracedContextFromContext(ctx)
```

This context object should be passed to the input handlers, which will be responsible for starting a new (root) span:

```go
err := trace.Operation(ctx, "myInputReceiver", myInputHandler)
```
Configuration
By default, MSX tracing will send trace data to a Jaeger listener at udp://localhost:6831.
The following configuration settings can be specified to override the default behaviour:
| Key | Description | Default |
|---|---|---|
| trace.service-name | name of service to supply with the trace | ${info.app.name} |
| trace.service-version | version of service to supply with the trace | ${info.app.version} |
| trace.collector | which collector to use: jaeger, datadog | jaeger |
| trace.reporter.enabled | report distributed tracing data | false |
| trace.reporter.host | jaeger/datadog host | localhost |
| trace.reporter.port | jaeger/datadog port | 6831 |
| trace.reporter.url | zipkin url | http://localhost:9411/api/v1/spans |
Datadog
To configure for datadog, set the following values in consul:
trace.collector: datadog
trace.reporter.enabled: true
trace.reporter.port: 8126
and in the kubernetes manifest:
env:
- name: TRACE_REPORTER_HOST
valueFrom:
fieldRef:
fieldPath: status.hostIP
That will send traces to the collector on the same host.
Operations
In the following sections, we will explore mechanisms for reusing and augmenting code using Functions, Actions, Operations, Decorators, Filters, and Handlers.
Functions
In go, functions are the lowest level of reusable execution. They accept a specific set of arguments and return a set of values.
func add(number1, number2 int) int {
return number1 + number2
}
Function Types
Go functions are first-class, meaning they can be passed around as values, including as parameters to other functions, or return values from functions. This enables powerful code composition and reuse. This is also a key feature of the functional style of programming:
In functional programming, functions are treated as first-class citizens, meaning that they can be bound to names (including local identifiers), passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner.
In go, to accept a function as an argument, you can declare a function type, and use it to declare the receiving parameter:
type unaryOperator func (int) int
type binaryOperator func(int, int) int
func evaluateUnaryExpression(operand int, operator unaryOperator) int {
	return operator(operand)
}
func evaluateBinaryExpression(leftOperand, rightOperand int, operator binaryOperator) int {
	return operator(leftOperand, rightOperand)
}
Actions
go-msx defines an ActionFunc type to describe an executable function (Action) signature:
type ActionFunc func(ctx context.Context) error
An ActionFunc accepts a single Context argument (to allow access to dependencies and operation-scoped data),
and returns a single error value indicating success (nil) or failure (non-nil). As described
above, this enables you to pass around these functions and abstractly re-use them:
// Send a message to the ANSWER_TOPIC channel
func deepThought(ctx context.Context) error {
	return stream.PublishObject(ctx, "ANSWER_TOPIC", map[string]any{
		"answer": 42,
	})
}
// Call the deepThought function when the application is running
func init() {
app.OnEvent(app.EventRun, app.PhaseDuring, deepThought)
}
In this example, we register an application event observer Action to be executed when the application has finished startup. The Action sends a simple message to a stream.
Operations
To simplify reusing code to work with Actions, go-msx has an Operation type:
type Operation struct {...}
func (o Operation) Run(ctx context.Context) error {...}
Operations provide a Run method to execute the operation, along with other methods
to create derived Operations using Filters and Decorators. These will be discussed in the
next section.
Middleware
To augment the functionality of Actions, Operations accept a series of Decorators and Filters. These follow the Middleware (or Mediator) pattern:
Middleware is software that's assembled into an app pipeline to handle requests and responses. Each component:
- chooses whether to pass execution to the next component in the pipeline.
- can perform work before and after the next component in the pipeline.
-- ASP.NET Core Middleware, Microsoft
For example, when receiving an incoming web request, we use Middleware to pass the incoming request and outgoing response through a series of stages:
stateDiagram
Router --> Logging : Request
Logging --> Authentication : Request
Authentication --> Authorization : Request
Authorization --> Handler : Request
Handler --> Authorization : Response
Authorization --> Authentication : Response
Authentication --> Logging : Response
Logging --> Router : Response
If the Authentication filter detects a request does not carry credentials, it can reject it immediately and return to the Logging Middleware without executing future stages in the pipeline.
Later, when the Authorization filter detects the logged-in user (determined by the credentials) does not have permissions to access the requested operation, it can also reject it immediately and return up the Middleware chain.
Operations support two kinds of Middleware, Decorators and Filters, described in the following sections.
Decorators
A Decorator is a Middleware implementation called using the same signature as an Action:
action := func(context.Context) error {...}
err := action(ctx)
decoratedAction := MyDecorator(action)
err := decoratedAction(ctx)
As described in the previous section, Middleware (such as a Decorator) can perform checks to decide whether to pass control to the next Action (possibly another Middleware instance) in the chain, or return control to its caller.
A Decorator can also inject values into the Context to provide later Actions access to calculated
data; this is the mechanism used by the Authentication filter to inject the User Context into
the current Operation.
Finally, a Decorator can handle, modify or wrap the returned error value. This allows
error values to be logged, mapped from one domain to another, or augmented with extra
information.
Usage
To apply a Decorator to an Operation, use the WithDecorator method to return a new, derived operation:
err := types.NewOperation(myAction).
WithDecorator(NewLoggingDecorator(logger)).
Run(ctx)
When adding Decorators to Operations, they wrap the inner action as they are applied. This means that the first applied decorator will be executed right before the Action on the way into the Middleware pipeline, and will be executed right after the Action on the way out of the Middleware pipeline.
For example:
err := types.NewOperation(myAction).
WithDecorator(NewLoggingDecorator(logger)).
WithDecorator(NewStatsDecorator(logger)).
Run(ctx)
On execution of the Operation's Run method:
stateDiagram
Operation --> Stats : (1) Call
Stats --> Logging : (2) Call
Logging --> myAction : (3) Call
myAction --> Logging : (4) Return
Logging --> Stats : (5) Return
Stats --> Operation : (6) Return
1. Operation calls the Stats Decorator `Run` method
2. Stats calls the Logging Decorator `Run` method
3. Logging calls the `myAction` function
4. `myAction` returns to the Logging Decorator
5. Logging returns to the Stats Decorator
6. Stats returns to the Operation
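The onion-style wrap order can be demonstrated with a self-contained sketch. The `ActionFunc` and decorator types are re-declared locally here rather than imported from go-msx, and the recording decorator is purely illustrative:

```go
package main

import (
	"context"
	"fmt"
)

type ActionFunc func(ctx context.Context) error
type ActionFuncDecorator func(action ActionFunc) ActionFunc

// namedDecorator records entry and exit so the onion ordering is visible.
func namedDecorator(name string, log *[]string) ActionFuncDecorator {
	return func(action ActionFunc) ActionFunc {
		return func(ctx context.Context) error {
			*log = append(*log, name+" in")
			err := action(ctx)
			*log = append(*log, name+" out")
			return err
		}
	}
}

func runPipeline() []string {
	var log []string
	action := ActionFunc(func(ctx context.Context) error {
		log = append(log, "myAction")
		return nil
	})

	// Apply Logging first, then Stats: Stats ends up outermost.
	action = namedDecorator("Logging", &log)(action)
	action = namedDecorator("Stats", &log)(action)

	_ = action(context.Background())
	return log
}

func main() {
	fmt.Println(runPipeline()) // [Stats in Logging in myAction Logging out Stats out]
}
```

Because the first-applied decorator wraps the action first, it sits innermost: Logging runs immediately around `myAction`, while the later-applied Stats decorator runs outermost.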
Implementation
A Decorator wraps an inner ActionFunc and returns a new ActionFunc.
type ActionFuncDecorator func(action ActionFunc) ActionFunc
Stateless
Static/Stateless decorators (those which do not have runtime-specified dependencies) can be implemented quite simply, as they do not require a factory:
// Static (stateless) decorator
func Escalate(action ActionFunc) ActionFunc {
return func(ctx context.Context) error {
return service.WithSystemContext(ctx, action)
}
}
Stateful Closure
Stateful Decorators are often implemented using a closure-based factory, communicating state via lexical scope:
// Factory for LoggingDecorator, accepting the dependencies,
// and returning a new decorator
func NewLoggingDecorator(logger types.Logger) ActionFuncDecorator {
// Return the decorator, which can be applied to an Operation
return func(action ActionFunc) ActionFunc {
// Return the implementation of the decorator
return func(ctx context.Context) error {
// Call the original action
err := action(ctx)
// Log the error message
if err != nil {
logger.WithContext(ctx).WithError(err).Error("Action failed")
}
// We logged the error above, so swallow it here ("log or handle errors, never both")
return nil
}
}
}
Stateful Struct
A struct-backed Decorator object may look slightly different since it uses an object to communicate state:
type StatsCounterDecorator struct {
counterName string
action ActionFunc
}
func (d StatsCounterDecorator) Run(ctx context.Context) error {
stats.IncrementCounter(d.counterName)
defer stats.DecrementCounter(d.counterName)
return d.action(ctx)
}
func NewStatsCounterDecorator(counterName string) ActionFuncDecorator {
return func(action ActionFunc) ActionFunc {
return StatsCounterDecorator{
counterName: counterName,
action: action,
}.Run
}
}
Next, we will look at Filters, which allow Decorators to be applied in a pre-determined order independent of the Operation instance definition.
Filters
When reusing multiple Middleware types for a given Operation, it may be important that some middleware instances are consistently applied before others.
For example, our TokenFilter is applied to our HTTP pipeline to extract the token from the Request and inject it into the Context. This must occur before our AuthenticationFilter checks the token in the Context and verifies the user is properly authenticated. Since we can swap in CertificateFilter for TokenFilter (when using Certificate Authentication), it is important that the Middleware not be coupled, and always be applied in the correct order.
Decorators do not directly allow application in middleware-specified order. This presents many problems when using factories to generate higher level Operation abstractions such as Endpoints and Message Subscribers. These factories do not need to know about the variety or application order of these Decorators, especially when they are mixed with framework-specified Middleware.
To enable this scenario, go-msx offers Filters. Filters allow you to ensure the correct ordering of Middleware when passed along from other components without requiring tight coupling or specialization.
Usage
To apply a filter to an Operation, use the WithFilter method:
err := types.NewOperation(myAction).
WithFilter(NewLoggingFilter(logger)).
Run(ctx)
Order
Filter Order can be thought of as priority: a higher number means it will be applied earlier to the target Action.
For example, if Filter A has an order of 0 and Filter B has an order of 100, then
Filter B will be applied first (executed second inbound, first outbound), and Filter A will
be applied second (executed first inbound, second outbound).
Note that when combining Operation instances using the Operation.Run method, Filters are
only ordered relative to the other Filters and Decorators on the Operation to which they were
directly applied.
Implementation
A Filter is envisioned as a simple wrapper around a Decorator which also provides a method to inspect the order that it should be applied:
// ActionFilter is an ordered Decorator
type ActionFilter interface {
Order() int
Decorator() ActionFuncDecorator
}
When authoring a Filter, any type that implements ActionFilter can be used.
You can even transform a Decorator into a Filter using a simple factory:
recoveryDecorator := NewRecoveryDecorator()
filter := types.NewOrderedDecorator(100, recoveryDecorator)
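Putting ordering and application together, here is a minimal standalone sketch. The types are re-declared locally and `applyFilters` is a hypothetical helper; go-msx's actual implementation differs in detail:

```go
package main

import (
	"context"
	"fmt"
	"sort"
)

type ActionFunc func(ctx context.Context) error
type ActionFuncDecorator func(action ActionFunc) ActionFunc

// ActionFilter mirrors the ordered-Decorator interface from the text.
type ActionFilter interface {
	Order() int
	Decorator() ActionFuncDecorator
}

type orderedFilter struct {
	order int
	name  string
	log   *[]string
}

func (f orderedFilter) Order() int { return f.order }
func (f orderedFilter) Decorator() ActionFuncDecorator {
	return func(action ActionFunc) ActionFunc {
		return func(ctx context.Context) error {
			*f.log = append(*f.log, f.name)
			return action(ctx)
		}
	}
}

// applyFilters wraps the action with the highest-order filter first,
// so it ends up innermost (executed last inbound, first outbound).
func applyFilters(action ActionFunc, filters []ActionFilter) ActionFunc {
	sort.SliceStable(filters, func(i, j int) bool {
		return filters[i].Order() > filters[j].Order()
	})
	for _, f := range filters {
		action = f.Decorator()(action)
	}
	return action
}

func run() []string {
	var log []string
	action := ActionFunc(func(ctx context.Context) error { return nil })
	filters := []ActionFilter{
		orderedFilter{order: 0, name: "A", log: &log},
		orderedFilter{order: 100, name: "B", log: &log},
	}
	_ = applyFilters(action, filters)(context.Background())
	return log
}

func main() {
	fmt.Println(run()) // [A B]
}
```

Filter B (order 100) is applied first and therefore sits closest to the Action, so Filter A (order 0) executes first inbound, exactly as described in the Order section above.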
Traced Operations
The Tracing module has some convenience methods which allow you to create traced operations, along with the ability to execute them in the foreground or background.
Factories
The Tracing module defines two Operation factories which can create named and traced Operations.
NewOperation
To create an Operation which has Distributed Tracing enabled, use the NewOperation
factory.
For example:
op := trace.NewOperation("multiplyTwoNumbers", multiplyTwoNumbers)
This will create an Operation with two decorators:
- SpanDecorator: Records an operation and its outcome in the distributed trace. If there is a trace in progress, a new child span will be created inside the current span.
- RecoverLogDecorator: If your Action panics, this will stop propagation and log the details.
You can add further Middleware to the returned Operation, or consume it as-is.
NewIsolatedOperation
To create an Operation which has Distributed Tracing enabled, but is not part of the
current trace, use the NewIsolatedOperation factory.
For example:
op := trace.NewIsolatedOperation("multiplyTwoNumbers", multiplyTwoNumbers)
This uses NewOperation above, and then applies the following decorator:
- UntracedContextDecorator: Removes reference to the current span before a new span is created. This has the effect of starting a new trace, completely independent of the calling context.
Execution
The Tracing module defines two Operation executors which can create and then execute Traced Operations.
ForegroundOperation
ForegroundOperation executes the action inside a new, isolated trace:
err := trace.ForegroundOperation(ctx, "simple", mySimpleAction)
BackgroundOperation
BackgroundOperation executes an action inside a background goroutine, using a new, isolated trace:
trace.BackgroundOperation(ctx, "simple", mySimpleAction)
This call does not offer persistence, cancellation, or restartability, so should not be used for job execution or management.
Ports
Ports describe the format of an incoming or outgoing stream or HTTP endpoint.
They are used to automatically serialize and deserialize communications to a convenient
structure for your application.
The term "Ports" is taken from Hexagonal Architecture, where it is used to describe
... dedicated interfaces to communicate with the outside world. They allow the entry or exiting of data to and from the application.
-- Hexagonal Architecture, Medium
go-msx uses two types of ports, with many overlapping options:
- Input Port: describes an HTTP request or incoming Stream Message
- Output Port: describes an HTTP response or outgoing Stream Message
Declaration
Ports are defined using go structures, consisting of a series of fields. Each field consists of three parts:

- Name: The name by which you can access the struct member in go code.
- Type: The type of the field to which the data will be converted. These types fall into one of a few categories, to simplify conversion:
  - Scalar: Any simple single-valued type (e.g. string, int, uuid, bool)
  - Array: A sequence of scalars
  - Object: A dictionary of scalars with string keys
  - File: An uploaded file
  - FileArray: A sequence of uploaded files
- Tags: A set of annotations on the struct field, describing attributes like source/destination, index, validation, optionality, etc.
Example
type outputs struct {
Code int `resp:"code"`
Body api.Response `resp:"body"`
Error api.Error `resp:"body" error:"true"`
}
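go-msx's port machinery is reflection-based; the three parts of each field can be enumerated with a rough standalone sketch. The `Response` and `Error` types below are stand-ins for the `api` package, and `describePort` is a hypothetical helper:

```go
package main

import (
	"fmt"
	"reflect"
)

// Stand-ins for the api package types in the example above.
type Response struct{}
type Error struct{}

type outputs struct {
	Code  int      `resp:"code"`
	Body  Response `resp:"body"`
	Error Error    `resp:"body" error:"true"`
}

// describePort lists each field's name, type, and `resp` struct tag.
func describePort(port interface{}) []string {
	t := reflect.TypeOf(port)
	var out []string
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		out = append(out, fmt.Sprintf("%s %s resp=%q", f.Name, f.Type, f.Tag.Get("resp")))
	}
	return out
}

func main() {
	for _, line := range describePort(outputs{}) {
		fmt.Println(line)
	}
}
```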
Input Ports
An Input Port is a go structure used to describe a source payload to be parsed, such as an HTTP Request or an incoming Stream Message.
Upon receipt of an incoming payload, go-msx will populate your data structure, execute validation, and if it is valid, pass it to your handler, such as an HTTP Controller Endpoint.
An example stream message input port:
type driftCheckResponseInputs struct {
EventType string `in:"header" const:"DriftCheck"`
Payload api.DriftCheckResponse `in:"body"`
}
An example HTTP request input port:
type createEntityInputs struct {
ControlPlaneId types.UUID `req:"path"`
Payload api.CreateEntityRequest `req:"body"`
}
Struct Tags
Each field with an input struct tag will be automatically populated before being passed to your handler.
Note that the struct tag prefix depends on the protocol being described:
- For HTTP Requests, the input struct tag must be `req` (for backwards compatibility).
- For Stream Messages, the input struct tag must be `in`.

The full syntax of the input struct tag is one of the following, as appropriate for the handling protocol:
in:"<fieldGroup>[=<peerName>]"
req:"<fieldGroup>[=<peerName>]"
The input struct tag contains the following subcomponents:
- `<fieldGroup>` (Required): The name of the message/request part from which the value will be extracted.
- `[=<peerName>]` (Optional): A peer is a field or property in the source message. For example, an HTTP request may have a header with the name Date, which can be requested using the following input struct tag:
  req:"header=Date"
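Parsing the `<fieldGroup>[=<peerName>]` syntax can be sketched standalone; `splitTag` is a hypothetical helper, not a go-msx API:

```go
package main

import (
	"fmt"
	"strings"
)

// splitTag parses the `<fieldGroup>[=<peerName>]` tag syntax described above.
// When no peer name is given, peerName is returned empty.
func splitTag(tag string) (fieldGroup, peerName string) {
	if i := strings.IndexByte(tag, '='); i >= 0 {
		return tag[:i], tag[i+1:]
	}
	return tag, ""
}

func main() {
	g, p := splitTag("header=Date")
	fmt.Println(g, p) // header Date

	g, p = splitTag("body")
	fmt.Println(g, p) // body
}
```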
See HTTP Request Ports and Stream Ports for available field groups and peer name conventions for your specific protocol.
Output Ports
An Output Port is a go structure used to describe a target payload to be generated, such as an HTTP Response or an outgoing Stream Message.
The port structure must be populated by you before being either:
- passed into the stream Message Publisher; or
- returned from your HTTP Endpoint Controller
go-msx will validate the contents of your structure and if it is valid, publish the message or response.
An example stream message output port:
type driftCheckResponseOutputs struct {
EventType string `out:"header" const:"DriftCheck"`
Payload api.DriftCheckResponse `out:"body"`
}
An example HTTP response output port:
type createEntityOutputs struct {
Payload api.CreateEntityResponse `resp:"body"`
}
Struct Tags
Each field with an output struct tag will be applied to the outgoing payload.
Note that the struct tag prefix depends on the protocol being described:
- For HTTP Responses, the output struct tag must be `resp` (for backwards compatibility).
- For Stream Messages, the output struct tag must be `out`.

The full syntax of the output struct tag is one of the following, as appropriate for the handling protocol:
out:"<fieldGroup>[=<peerName>]"
resp:"<fieldGroup>[=<peerName>]"
The output struct tag contains the following subcomponents:
- `<fieldGroup>` (Required): The name of the message/response part into which the value will be injected.
- `[=<peerName>]` (Optional): A peer is a field or property in the target payload. For example, an HTTP response may have a header with the name Date, which can be populated using the following output struct tag:
  resp:"header=Date"
See HTTP Response Ports and Stream Ports for available field groups and peer name conventions for your specific protocol.
Validation
Port structures use struct tags to declare JSON Schema constraints for fields within the port and data transfer objects.
type driftCheckResponseInput struct {
EventType string `in:"header" const:"DriftCheck"`
Payload api.DriftCheckResponse `in:"body"`
}
For example, in the driftCheckResponseInput port, the EventType field
specifies it must contain the constant value DriftCheck through the const tag.
When generating AsyncAPI documentation, validation constraints specified in the port struct or the data transfer object will automatically be included in the documentation.
JSON Schema Struct Tags
JSON Schema is a domain-specific language used to describe constraints on values that may be expressed using the JSON type system (null, object, array, number, string).
The following struct tags may be used to specify JSON Schema constraints and validators within a port struct or a data transfer object:
- optional, required: Boolean entries to override the base field type optionality. By default, pointer/slice/map and types.Optional types are optional, and other types are non-optional. Must be "true" or "false".

  type inPort struct {
      Expiry     types.Duration  `in:"header" optional:"true"`
      BestBefore *types.Duration `in:"header" required:"true"`
  }

- deprecated: Boolean to indicate the field should not be used and will be removed. Must be "true" or "false".

  type inPort struct {
      VmsTenant types.UUID `in:"header" deprecated:"true"`
  }

- title, description: Expository language to help users understand the purpose of the field.

  type outPort struct {
      ContentType string `out:"header" title:"Content Type" description:"MIME type for the message body" example:"application/xml"`
  }

- const: Value that the field must contain to be valid. Equivalent to a single-valued enum. Must be a scalar convertible to a valid value of the field data type. See "Port Field Constraint Handling", below, for more details.

  type MyRequest struct {
      Answer int `in:"header" const:"42"`
  }

- default: Default value that the field will behave as having if not explicitly specified. Must be a scalar convertible to a valid value of the field data type. See "Port Field Constraint Handling", below, for more details.

  type MyRequest struct {
      Pi float64 `in:"header" default:"3.14"`
  }

- example: Example value to be presented in the schema document. Must be a scalar convertible to a valid value of the field data type.

  type MyRequest struct {
      Hour   int `in:"header" example:"12"`
      Minute int `in:"header" example:"30"`
  }

- enum: Comma-separated list of possible values for the field. Only these values will be accepted by the field during validation. Must be scalars convertible to the field data type.

  type MyResponseOutput struct {
      Code int `out:"code" enum:"200,400,401,403,404"`
  }

- minimum, maximum: Range constraints for possible values of the field. Only values >= minimum (if specified) and <= maximum (if specified) are valid. Must be scalars convertible to the field data type.

  type MyResponseOutput struct {
      Radians float64 `out:"header" minimum:"0" maximum:"6.28"`
  }

- minLength, maxLength: Length constraints for the value of the field. Applies to string fields. Must be integers if specified.

  type MyResponse struct {
      ServiceType string `in:"header" minLength:"4" maxLength:"16"`
  }

- minProperties, maxProperties: Size constraints for the value of the field. Applies to object fields. Must be integers if specified.

  type MyResponse struct {
      ServiceType map[string]string `in:"header" minProperties:"1"`
  }

- pattern: Regular expression that values must match to be valid. Applies to string fields.

  type MyResponse struct {
      DeviceId string `in:"header" pattern:"^CPE-.*$"`
  }

- format: String identifier of pre-defined formats. Applies to string fields. Normally will be automatic based on the underlying field type.

  type MyResponse struct {
      When types.Time `in:"header" format:"date"`
  }

- minItems, maxItems: Length constraints for the value of the field. Applies to array fields (slices). Must be integers if specified.

  type MyRequest struct {
      DeviceIds []types.UUID `in:"body" minItems:"1"`
      TenantIds []types.UUID `in:"body" maxItems:"1"`
  }
The underlying jsonschema-go library provides a few more constraints, which you can view at the package GoDoc
Special Tags
Ports have a few special tags which can be applied at the top level:
- validation: Boolean enabling or disabling validation. Documentation will still be generated as if validation is enabled.
Constraints on Named Types
Fields in Port structures and DTOs with simple and anonymous types may be augmented using the JSON schema tags above. However, named types are shared across many fields and therefore cannot be augmented in-place. For example:
type DriftCheckRequest struct {
Action string `json:"action" const:"checkDrift"`
GroupId types.UUID `json:"groupId,omitempty"`
Timestamp types.Time `json:"timestamp" minimum:"2022-01-01T00:00:00Z"`
EntityLevelCompliance string `json:"entityLevelCompliance" enum:"full,partial"`
Standards []ConfigPayload `json:"standards,omitempty" minItems:"1" required:"true"`
...
}
Fields with Named Types include:
- GroupId: types.UUID
- Timestamp: types.Time
These fields will ignore any schema constraints declared in the struct tag,
such as the minimum tag on Timestamp.
Fields with Simple or Anonymous Types include:
- Action: string
- EntityLevelCompliance: string
- Standards: []ConfigPayload
Each of these fields has schema constraints declared which will be honoured.
Standards is an array of DTOs []ConfigPayload and therefore is of an anonymous
type.
Constraints on DTO structs
To configure a parent DTO struct using struct tags, include an anonymous
field _ with the desired constraints. For example:
type RemediateRequest struct {
...
_ struct{} `additionalProperties:"false" description:"RemediateRequest contains a remediation request."`
}
This will add description and additionalProperties schema constraints
to the RemediateRequest struct in the schema.
Custom Schema Generation for Named Types
The underlying jsonschema-go library provides a number of interfaces to customize or replace the JSON schema generated for your Named Type:
- NamedEnum: Provides a list of name/value pairs for your enumerable type.
- Enum: Provides a list of values for your enumerable type.
- Preparer: Intercepts the reflected JSON Schema and allows alteration.
- Exposer: Provides a complete parsed JSON Schema for your type.
- RawExposer: Provides a complete unparsed JSON Schema for your type.
- OneOfExposer: Provides a list of oneOf elements for your type.
- AnyOfExposer: Provides a list of anyOf elements for your type.
- AllOfExposer: Provides a list of allOf elements for your type.
- NotExposer: Provides a not element for your type.
- IfExposer: Provides an if element for your type.
- ThenExposer: Provides a then element for your type.
- ElseExposer: Provides an else element for your type.
You can find more details about these interfaces on the package GoDoc.
Port Field Constraint Handling
To ease development burden, when using const or default on a Port Field, the value will be applied
during input population (subscriber) or output population (publisher). Note that this only applies to
scalars (e.g. headers), and only those directly contained in the Port structure. In particular, it does
not apply to the request/response body or its sub-fields.
From the example at the beginning of the chapter:
type driftCheckRequestOutput struct {
EventType string `out:"header" const:"DriftCheck"`
Payload DriftCheckRequest `out:"body"`
}
type DriftCheckRequest struct {
Action string `json:"action" const:"checkDrift"`
...
}
The EventType field of driftCheckRequestOutput will be filled with DriftCheck if not supplied
by the publisher, since it is a scalar, and directly contained within the Port structure. If another
value is supplied, the schema validation will fail, so it is best to simply not supply the value.
The Action field of DriftCheckRequest will not be filled with checkDrift since it is not
directly contained within the Port structure. It will be validated during schema validation to ensure
only that value is supplied.
Services
A service is a reusable part of an application. Services are used to isolate responsibility and are composed to provide functionality. Example services include:
- REST Controller
- Application Service
- Stream Message Subscriber
- Database Repository
- API Integration
Each service may have a number of supporting components and functions:
- Interface Definition
- Mock
- Structure
- Dependencies
- Implementation
- Abstract Constructor
- Lifecycle Registration (for root components)
Let's look at each of these using an example Application Service, HelloWorldService.
Interface Definition
To enable test double (mock) substitutions, you should define an interface declaring the
public methods of your component. In our example, we have one method, SayHello:
type HelloWorldService interface {
SayHello(context.Context) (string, error)
}
This interface type will be used later by our Abstract Constructor to ensure we provide any substituted dependency instead of returning a live object when requested during testing.
The interface should be externally visible (capitalized) so that other modules can re-use it. This opposes the standard go convention of "interface definition by consumer" enabled by duck typing, however it allows you to pre-generate mocks for your consumers' testing needs.
Mock
Each service (other than root components) will be re-used by one or more other services, and therefore should provide a mock. Using mockery, for example:
//go:generate mockery --name=HelloWorldService --case=snake --with-expecter
This mock will be generated automatically when you run go generate, such as when using
the make generate target.
Structure
Each service is defined using a simple go structure:
type helloWorldService struct {}
In most situations, you do not want to make the implementation visible outside the current module, and therefore the structure name should start with a lowercase letter. Consumers of your structure will receive a reference via the Interface, which will be visible externally.
Dependencies
A service often depends on other services (provided by the go-msx framework, your application,
or third parties). These dependencies are declared in the service structure. For example,
our HelloWorldService can depend on a repository:
type helloWorldService struct {
helloWorldRepository HelloWorldRepository
}
Dependencies should be declared in the structure by referring to their abstract (interface) type, so that during coverage testing, you can use Mocks to test all code paths.
Module dependencies should not be declared in the structure, but rather in the local pkg.go.
This includes loggers.
Implementation
Each service will have a series of public functions matching the Interface Definition:
func (r *helloWorldService) SayHello(ctx context.Context) (string, error) {
return "Hello, World", nil
}
Service methods should use a pointer receiver, as they will be passed around on the heap inside an interface reference.
Context Accessor
Each substitutable component in go-msx requires a context accessor to allow injecting and inspecting overrides:
func ContextHelloWorldService() types.ContextAccessor[HelloWorldService] {
return types.NewContextAccessor[HelloWorldService](contextKeyNamed("HelloWorldService"))
}
Key to type safety is the external invisibility of the context key. This is guaranteed
by defining a module-local type (contextKey or contextKeyNamed) and using an instance of it
to index the context inspection/injection.
Abstract Constructor
To manage the injection of dependencies, go-msx applications use abstract constructors in the style of go factories. In particular, they:
- return a reference to an interface type instead of the concrete implementation;
- check the passed-in Context for overrides for the component, and if found, return it;
- fail on error in constructing any subcomponents
- check configuration to select from alternative dependencies
Our service has a single, simple dependency:
func NewHelloWorldService(ctx context.Context) (result HelloWorldService, err error) {
var ok bool
if result, ok = ContextHelloWorldService().TryGet(ctx); !ok {
helloWorldRepository, err := NewHelloWorldRepository(ctx)
if err != nil {
return nil, err
}
result = &helloWorldService{
helloWorldRepository: helloWorldRepository,
}
}
return
}
Lifecycle Registration
For root components (those not instantiated by other components), you must instantiate them
during application startup. Components that should be created for all commands should use
the OnEvent registration:
func init() {
var svc HelloWorldService
app.OnEvent(
app.EventStart,
app.PhaseDuring,
func (ctx context.Context) (err error) {
svc, err = NewHelloWorldService(ctx)
return err
})
}
REST API Controller
MSX promotes the usage of the common Controller > Service > Repository layered architecture within microservices.
The role of the Controller is to accept REST-based API requests from callers (UI, swagger, other microservices), and route them to the service.
Defining the Controller structure
To define a controller, create a standard Go structure with fields for its required dependencies:
type productController struct {
productService *productService
productConverter *productConverter
}
This example shows two common dependencies:
- Service: The service is responsible for responding to the requests. The controller acts as an HTTP gateway to the service functionality.
- Converter: The converter transforms data transfer objects (requests and responses) to and from domain models.
Implementing the RestController interface
For registration with the web server, the webservice.RestController interface defines a single required method, Routes.
Add a standard implementation to your controller, for example:
func (c *productController) Routes(svc *restful.WebService) {
tag := webservice.TagDefinition("Products", "Products Controller")
webservice.Routes(svc, tag,
c.listProducts,
c.getProduct,
c.createProduct,
c.updateProduct,
c.deleteProduct)
}
This implementation demonstrates:
- Adding each endpoint implementation to the supplied WebService.
- Tagging the routes for Swagger. This allows the Swagger UI to show a human-readable controller name and to group endpoints properly. Note that the tag does not have to be unique: it can be declared at module level and shared across multiple controllers (e.g. v1 and v2), which shows all of the endpoints from those controllers in a single group.
Implementing an Endpoint
Each endpoint on your controller should be declared inside its own method. Here's an example implementation of a List endpoint for the Products controller:
var viewPermissionFilter = webservice.PermissionsFilter(rbac.PermissionViewProduct)
func (c *productController) listProducts(svc *restful.WebService) *restful.RouteBuilder {
	type params struct {
		Category *string `req:"query"`
	}

	return svc.GET("").
		Operation("listProducts").
		Doc("Retrieve the list of products, optionally filtering by the specified criteria.").
		Do(webservice.StandardList).
		Do(webservice.ResponsePayload(api.ProductListResponse{})).
		Do(webservice.PopulateParams(params{})).
		Filter(viewPermissionFilter).
		To(webservice.Controller(
			func(req *restful.Request) (body interface{}, err error) {
				p := webservice.Params(req).(*params)
				products, err := c.productService.ListProducts(req.Request.Context(), p.Category)
				if err != nil {
					return nil, err
				}
				return c.productConverter.ToProductListResponse(products), nil
			}))
}
Here we are declaring the endpoint:
- `type params ...` - accepts an optional string parameter `category` as a query parameter
- `svc.GET` - will use the GET HTTP method
- `GET("")` - has the same path as the controller
- `Operation("listProducts")` - has the operation name `listProducts`, which will appear in tracing, logs, and the swagger definition
- `Doc("...")` - has the supplied description in the Swagger UI
- `Do(webservice.StandardList)` - is an implementation of a List Collection endpoint; returns 200 by default
- `Do(webservice.PopulateParams(params{}))` - populates request parameters into the supplied structure
- `Do(webservice.ResponsePayload(api.ProductListResponse{}))` - will return the specified response DTO, wrapped inside an MsxEnvelope object
- `Filter(viewPermissionFilter)` - will check that callers have the "VIEW_PRODUCT" permission, as defined in the viewPermissionFilter object
- `To(webservice.Controller(func ...))` - will execute the supplied function when this endpoint is called
These are some of the route-building functions available in go-msx. You may combine the go-restful routing functions with the go-msx routing functions to define many aspects of your endpoint.
Validation
Controller parameter validation can be performed in two ways:
- Any member of the `params` struct passed into `webservice.PopulateParams` that implements the `validate.Validatable` interface will be validated before being passed into the controller via a request attribute.

  Create and Update request bodies will generally implement this interface. For example:

  ```go
  type SubscriptionCreateRequest struct {
      OfferId   string `json:"offerId"`
      TenantId  string `json:"tenantId"`
      ServiceId string `json:"serviceId"`
  }

  func (s *SubscriptionCreateRequest) Validate() error {
      return types.ErrorMap{
          "offerId":   validation.Validate(&s.OfferId, validation.Required, is.UUID),
          "tenantId":  validation.Validate(&s.TenantId, validation.Required, is.UUID),
          "serviceId": validation.Validate(&s.ServiceId, validation.Required, is.UUID),
      }
  }
  ```

- A custom validation function may be provided using `.Do(requestValidatorFunc)`. Extending the example above, which contains a `Category` parameter:

  ```go
  ...
  Do(webservice.ValidateParams(func(req *restful.Request) (err error) {
      p, ok := webservice.Params(req).(*params)
      if !ok {
          return webservice.NewInternalError(errors.New("incorrect params type"))
      }
      return types.ErrorMap{
          "category": validation.Validate(&p.Category,
              validation.Required, validation.In("a", "b", "c")),
      }
  })).
  ...
  ```
Any non-nil errors returned by the validation function will cause a 400 BAD REQUEST response detailing
the validation errors.
Common validators are provided by the github.com/go-ozzo/ozzo-validation package. A few custom validators are available in the validate package.
Implementing a Constructor
To allow instantiation of your controller, you can provide a constructor:
func newProductController(ctx context.Context) webservice.RestController {
	return &productController{
		productService:   newProductService(ctx),
		productConverter: &productConverter{},
	}
}
In this case, we expect the product service to be injectable, so we use its constructor function to create an instance of the dependency. This simplifies unit testing by allowing us to inject a mock for the service.
In contrast, we do not expect to use a mock converter, so it is instantiated directly.
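Because the converter is stateless mapping code, it needs no injection seam. A minimal sketch of what such a converter might look like (the domain model and DTO types here are hypothetical, not taken from this document):

```go
package main

import "fmt"

// product is a hypothetical domain model.
type product struct {
	Name     string
	Category string
}

// ProductResponse and ProductListResponse are hypothetical DTOs.
type ProductResponse struct {
	Name     string `json:"name"`
	Category string `json:"category"`
}

type ProductListResponse struct {
	Products []ProductResponse `json:"products"`
}

// productConverter maps domain models to DTOs; it holds no state,
// which is why it can be instantiated directly rather than injected.
type productConverter struct{}

func (productConverter) ToProductListResponse(products []product) ProductListResponse {
	result := ProductListResponse{Products: []ProductResponse{}}
	for _, p := range products {
		result.Products = append(result.Products, ProductResponse{
			Name:     p.Name,
			Category: p.Category,
		})
	}
	return result
}

func main() {
	resp := productConverter{}.ToProductListResponse(
		[]product{{Name: "widget", Category: "tools"}})
	fmt.Println(len(resp.Products)) // prints 1
}
```

Since the mapping is pure, it can be unit tested directly without any mocks.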
Connecting the Controller to the Application Lifecycle
In order to instantiate your controller during application startup, you can register a simple
init function:
func init() {
	app.OnEvent(app.EventCommand, app.CommandRoot, func(ctx context.Context) error {
		app.OnEvent(app.EventStart, app.PhaseBefore, func(ctx context.Context) error {
			controller := newProductController(ctx)
			return webservice.
				WebServerFromContext(ctx).
				RegisterRestController(pathRoot, controller)
		})
		return nil
	})
}
This will register your controller during normal microservice startup. Since it
is only registering for CommandRoot, it will not be created during migrate,
populate or other custom command execution.
To ensure your module is included in the built microservice, include the module from your main.go:
import _ "cto-github.cisco.com/NFV-BU/productservice/internal/products"
REST API Controller
MSX promotes the usage of the common Controller > Service > Repository layered architecture within microservices.
The role of the Controller is to accept REST-based API requests from callers (UI, swagger, other microservices), and route them to the service.
To generate a complete domain, including the controller, use the skel tool.
In the Services section, we described the various components of every service, including REST API Controllers. The following sections assume familiarity with those components.
REST Controllers have specific requirements for:
- Interface Definition
- Mock
- Dependencies
- Implementation
- Lifecycle Registration (for root components)
Interface Definition
The REST controller component is never mocked as it is a root component. No interface definition is required.
Mock
The REST controller component is never mocked as it is a root component. No mock generation is required.
Dependencies
REST Controllers typically depend exclusively on the Application Service. For example:
type applicationController struct {
applicationService ApplicationServiceApi
}
The converter dependency has moved to the Application Service, which now receives and returns DTOs.
Implementation
EndpointsProducer
Modern go-msx REST Controllers must implement the restops.EndpointsProducer interface.
The Endpoints function returns a list of endpoints implemented by the controller:
func (c *applicationController) Endpoints() (restops.Endpoints, error) {
	builders := restops.EndpointBuilders{
		c.getConfigApplicationResultsByConfigApplicationId(),
		...
	}
	return builders.Endpoints()
}
Each endpoint or builder is generated by calling a method on the controller from
the Endpoints function. These are then aggregated and returned as a slice of Endpoints.
EndpointTransformersProducer
A controller may also implement the restops.EndpointTransformersProducer interface
in order to apply transformations to each of the registered endpoints, including tagging
and path manipulation:
func (c *applicationController) EndpointTransformers() restops.EndpointTransformers {
	openapi.AddTag("Applications", "Configuration Applications Controller")
	return restops.EndpointTransformers{
		restops.AddEndpointPathPrefix(pathPrefixConfiguration),
		restops.AddEndpointTag("Applications"),
	}
}
Current transformers are:
- `AddEndpointTag`: Adds a tag to each endpoint
- `AddEndpointPathPrefix`: Adds a prefix to the path of each endpoint
- `AddEndpointErrorConverter`: Sets a custom ErrorConverter for each endpoint
- `AddEndpointErrorCoder`: Sets a custom ErrorCoder for each endpoint
- `AddEndpointContextInjector`: Adds a Context injector to each endpoint
- `AddEndpointMiddleware`: Adds an HTTP Middleware to each endpoint
EndpointBuilder
Each endpoint in the controller can be generated using one of the provided EndpointBuilder
instances. go-msx provides builders for each active API style:
- v2: Uses response envelopes and v2 pagination style
- v8: Uses no response envelopes, v8 error response format, and v8 pagination style
Endpoint Types
Each API style builder provides a number of different endpoint types:
- List: Return a series of entities matching a given criteria
- Retrieve: Returns a single entity matching a primary key
- Create: Instantiates a new entity using the supplied payload
- Update: Replaces an existing entity using the supplied payload
- Delete: Destroys an existing entity matching a primary key
- Command: Executes an operation specific to the entity domain
Example
Here is an example (from lanservice) of a simple "Command" endpoint:
func (c *applicationController) applyConfiguration() restops.EndpointBuilder {
	type inputs struct {
		Id                types.UUID `req:"path"`
		IncludeSubTenants bool       `req:"query" required:"false" default:"false" description:"Include Sub Tenants"`
	}

	type outputs struct {
		Body api.ConfigurationResponse `resp:"body"`
	}

	return v2.
		NewCommandEndpointBuilder(pathSuffixConfigurationId, "applications").
		WithId("applyConfigurationForTenant").
		WithInputs(inputs{}).
		WithOutputs(outputs{}).
		WithPermissions(permissionManageSwitchConfigurations).
		WithDoc(new(openapi3.Operation).
			WithSummary("Apply configuration to a tenant")).
		WithHandler(func(ctx context.Context, inp *inputs) (out outputs, err error) {
			out.Body, err = c.applicationService.
				ApplyConfiguration(ctx, inp.Id, inp.IncludeSubTenants)
			return
		})
}
Above, you can see a number of different components:
- input port structure: defines the fields to be retrieved from the request
- output port structure: defines the fields to be applied to the response
- builder: simplifies creating endpoints
- operation name: defines the operation key in OpenApi and Tracing
- permissions: enumerates the possible passing permission(s)
- handler: called when the endpoint is activated
- documentation: populates the OpenApi documentation
Handler
The Endpoint Handler accepts functions with arbitrary arguments, which it will fill out by matching the argument type. These can include:
- `context.Context`: The context of the request
- `*http.Request`: The inbound HTTP request being handled, allowing for manual request parsing
- `http.ResponseWriter`: The outbound HTTP response to return, allowing for manual response handling
- `*inputs`: The Input Port structure declared by a call to `WithInputs`, containing the populated inputs
The Endpoint Handler also accepts functions with arbitrary return values, which it will consume:
- `outputs`: The Output Port structure declared by a call to `WithOutputs`, which you can populate with response outputs
- `error`: An error to be applied to the defined (or style-default) error response body
If the Output Port structure is excluded from the declaration of
your return values, you are expected to use an http.ResponseWriter to manually
send the success response (or return an error).
If both the Output Port structure and error are excluded, you are expected
to manually send a response (whether error or success).
Standard Practice
The most common handler function signature includes context, inputs, outputs, and error:
.WithHandler(
	func(ctx context.Context, inp *inputs) (out outputs, err error) {
		...
	})
Manual Handling
To manually handle the request/response cycle, use a standard go HTTP handler:
.WithHttpHandler(
	func(resp http.ResponseWriter, req *http.Request) {
		...
	})
Custom Validation
Endpoint parameter validation can be performed in two ways:
- Define struct tags on each field declaring the jsonschema validation that is required. This is used to validate the format of strings, enumerations, etc.

- Any member of the `inputs` struct passed into `Endpoint.Inputs` that implements the `validate.Validatable` interface will be validated before being passed into the controller. Create and Update request bodies with complex inter-field interactions will typically use this. Common validators are provided by the github.com/go-ozzo/ozzo-validation package. A few custom validators are available in the validate package.
Any non-nil errors returned by the validation function will cause an instance of ValidationErrors
to be sent back to the client (with a 400 Bad Request header) detailing the errors.
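As a sketch of the second approach, a request body might validate itself as below. The type and field names are hypothetical, and plain checks stand in for the ozzo-validation helpers so the example stays self-contained; real code would typically compose `validation.Validate` rules as shown earlier:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// CreateDeviceRequest is a hypothetical Create request body.
type CreateDeviceRequest struct {
	Name     string `json:"name"`
	Category string `json:"category" enum:"switch,router"` // jsonschema-style tag
}

// Validate follows the validate.Validatable-style contract:
// return nil when the payload is acceptable, an error otherwise.
func (r *CreateDeviceRequest) Validate() error {
	if strings.TrimSpace(r.Name) == "" {
		return errors.New("name: cannot be blank")
	}
	if r.Category != "switch" && r.Category != "router" {
		return errors.New("category: must be one of switch, router")
	}
	return nil
}

func main() {
	bad := CreateDeviceRequest{Name: "core-1", Category: "toaster"}
	fmt.Println(bad.Validate()) // prints the category validation error
}
```

Any error returned here surfaces to the caller as a 400 Bad Request, per the behaviour described above.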
Response Codes
Success Responses
To use the default success status code (determined by which builder you used), no implementation is required.
To override the success code, add a ``Code int `resp:"code"` `` field to your output port struct and populate it before returning from your handler.
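For example, a hypothetical output port for a Create endpoint that returns 201 instead of the builder's default might look like this (the handler wiring around it is assumed; only the `resp`-tagged struct and the populate-before-return pattern are the point):

```go
package main

import "fmt"

// createOutputs is a hypothetical output port: the Code field
// overrides the default success status code for this response.
type createOutputs struct {
	Code int         `resp:"code"`
	Body interface{} `resp:"body"`
}

// createEntity is shaped like an endpoint handler body: it populates
// the output port, including the status override, before returning.
func createEntity() (out createOutputs, err error) {
	out.Code = 201 // override the builder's default success code
	out.Body = map[string]string{"id": "42"} // illustrative payload
	return
}

func main() {
	out, _ := createEntity()
	fmt.Println(out.Code) // prints 201
}
```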
Error Responses
REST operations have a built-in default error coder, which you can override using a custom error mapper or error coder.
Default mappings include:
| Error | Code |
|---|---|
| js.ErrValidationFailed | 400 |
| ops.ErrMissingRequiredValue | 400 |
| rbac.ErrTenantDoesNotExist | 401 |
| rbac.ErrUserDoesNotHaveTenantAccess | 401 |
| repository.ErrAlreadyExists | 409 |
| repository.ErrNotFound | 404 |
Lifecycle Registration
In order to instantiate your controller during application startup, you can register a simple
init function:
func init() {
	app.OnCommandsEvent(
		[]string{
			app.CommandRoot,
			app.CommandOpenApi,
		},
		app.EventStart,
		app.PhaseBefore,
		func(ctx context.Context) error {
			controller, err := newApplicationController(ctx)
			if err != nil {
				return err
			}
			return restops.
				ContextEndpointRegisterer(ctx).
				RegisterEndpoints(controller)
		})
}
This will register your controller during normal microservice startup, as well as during OpenApi spec generation.
To ensure your module is included in the built microservice, include the module from your main.go:
import _ "cto-github.cisco.com/NFV-BU/lanservice/internal/application"
REST Input Ports
As described in Input Ports, an Input Port is a go structure used to describe a source payload to be parsed, such as an HTTP REST API Request.
Each field in the Input Port structure is expected to have a req struct tag. Any fields
missing this tag will be ignored by the input populator.
The structure includes any required or optional parameters (Cookies, Form, Headers, Path, Query), along with any expected body content.
Example
The following example shows a simple Create API input port definition:
type createEntityInputs struct {
	ControlPlaneId types.UUID              `req:"path"`
	Payload        api.CreateEntityRequest `req:"body"`
}
In this example, ControlPlaneId is expected to be found in the path (with the default
path parameter style, controlPlaneId). The body is expected to contain a JSON-serialized
instance of api.CreateEntityRequest.
Field Groups
The possible field groups used by the req struct tag are:
- method: The HTTP method
- header: An HTTP header
- cookie: A sub-entry from the Cookie header
- path: A segment of the path
- query: A query parameter
- form: A form field
- body: The body content
Field Index
Each field will typically have a group (source) and index (key).
You will recall the format of the req tag:
req:"<fieldGroup>[=<fieldIndex>]"
Most field indices default to the lowerCamelCase inflection of the field name. The only exception is for headers, which are the Upper-Kebab-Case inflection of the field name by default.
Non-indexed fields such as method and body do not accept a field index, and
they will be ignored if specified; there is only one of each of these in any request.
REST Output Ports
As described in Output Ports, an Output Port is a go structure used to describe a destination payload to be generated, such as an HTTP REST API Response. An Output port is the Response equivalent of an Input Port, and follows many of the same rules and patterns.
Each field in the Output Port structure is expected to have a resp struct tag. Any fields
missing this tag will be ignored by the output populator.
The structure includes any required or optional results, including status code, headers, paging, and success and/or error bodies.
Example
The following example shows a simple Create API output port definition:
type createEntityOutputs struct {
	Payload api.CreateEntityResponse `resp:"body"`
}
The response body will be populated with a JSON-serialized instance of api.CreateEntityResponse.
Field Groups
The possible field groups used by the resp struct tag are:
- code: The HTTP status code for the response
- header: An HTTP header to set on the response
- paging: An envelope wrapping the body containing the paging response
- body: The primary payload of the response (excluding any envelopes/paging). You may also specify `success:"true"` or `error:"true"` to define multiple potential bodies.
Field Index
Each header field will have a group (source) and index (key).
You will recall the format of the resp tag:
resp:"<fieldGroup>[=<fieldIndex>]"
Header field indices default to the Upper-Kebab-Case inflection of the field name.
Non-indexed fields such as code and body do not accept a field index, and
they will be ignored if specified; there is only one of each of these in any
generated response.
Middleware
You can use a Mediator/Middleware component to augment the functionality of go-msx components such as endpoints:
> Middleware is software that's assembled into an app pipeline to handle requests and responses. Each component:
>
> - chooses whether to pass execution to the next component in the pipeline.
> - can perform work before and after the next component in the pipeline.
>
> -- ASP.NET Core Middleware, Microsoft
Go HTTP Middleware factories implement a de facto function signature:
type Middleware func(next http.Handler) http.Handler

func myMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Middleware BEFORE logic goes here...
		next.ServeHTTP(w, r)
		// Middleware AFTER logic goes here...
	})
}
The factory accepts the subsequent Handler in the middleware chain, and returns a new Handler which wraps it with the desired added functionality. You can find more details and examples on this blog post.
Available HTTP Middleware
go-msx does not currently define any HTTP Middleware; however, many third-party middleware libraries compatible with the signature above are available.
SQL Database Repository
MSX promotes the usage of the common Controller > Service > Repository layered architecture within microservices.
The role of the Repository is to query and mutate the persistent storage of Models.
Defining the Repository
To define a repository, create a standard Go structure with an anonymous field for the CrudRepository:
type deviceSqlRepository struct {
	sqldb.CrudRepositoryApi
}
The CrudRepositoryApi object provides access to the SQL database underneath using generic objects
and slices.
Writing a Constructor
A standard repository constructor allows for dependency injection (during testing) and normal creation (during runtime):
func newDeviceRepository(ctx context.Context) deviceRepositoryApi {
	repo := deviceRepositoryFromContext(ctx)
	if repo == nil {
		repo = &deviceSqlRepository{
			CrudRepositoryApi: sqldb.
				CrudRepositoryFactoryFromContext(ctx).
				NewCrudRepository("device"),
		}
	}
	return repo
}
- The `CrudRepositoryFactory` allows us to test the repository without requiring an actual database implementation.
- The `deviceRepositoryFromContext` allows us to test this repository's reverse-dependencies without requiring an actual `deviceSqlRepository`.
Implementing Common Access Methods
A basic repository will likely have the following common methods:
- `FindAll` - Retrieve all models
- `FindByKey` - Retrieve a single model by its primary key
- `Save` - Store a single model
- `Delete` - Remove a single model
More advanced repositories may have some less-common methods:
- `FindAllByIndexXXX` - Retrieve all models matching the specified criteria using an index
- `FindAllPagedBy` - Retrieve a subset of models matching the specified criteria, using the specified sorting and pagination
- `Truncate` - Remove all models
FindAll
func (r *deviceSqlRepository) FindAll(ctx context.Context) (results []device, err error) {
	logger.WithContext(ctx).Info("Retrieving all Device records")
	err = r.CrudRepositoryApi.FindAll(ctx, &results)
	return
}
- Log method intention
- Delegate to our internal CrudRepository to perform the record retrieval and struct mapping.
FindByKey
func (r *deviceSqlRepository) FindByKey(ctx context.Context, name string) (result *device, err error) {
	logger.WithContext(ctx).Infof("Retrieving Device by key %q", name)
	var res device
	err = r.CrudRepositoryApi.FindOneBy(ctx, map[string]interface{}{
		"name": name,
	}, &res)
	if err == sqldb.ErrNotFound {
		err = repository.ErrNotFound
	} else if err == nil {
		result = &res
	}
	return
}
- Log method intention, including the primary key
- Delegate to our internal CrudRepository to perform the record retrieval and struct mapping.
- The `CrudRepositoryApi.FindOneBy` method accepts a map of criteria to search by -- in this case, the primary key.
- Normalize the `sqldb` error code to use `repository` error codes.
Save
func (r *deviceSqlRepository) Save(ctx context.Context, device device) (err error) {
	logger.WithContext(ctx).Infof("Storing Device with key %q", device.Name)
	return r.CrudRepositoryApi.Save(ctx, device)
}
- Log method intention, including the primary key
- Delegate to our internal CrudRepository to perform the record storage and struct mapping.
- The `CrudRepositoryApi.Save` method performs an `UPSERT` query in Cockroach, so it behaves in much the same way as the `Save` method from a KV repository.
Delete
func (r *deviceSqlRepository) Delete(ctx context.Context, name string) (err error) {
	logger.WithContext(ctx).Infof("Deleting Device by key %q", name)
	return r.CrudRepositoryApi.DeleteBy(ctx, map[string]interface{}{
		columnDeviceName: name,
	})
}
- Log method intention, including the primary key
- Delegate to our internal CrudRepository to perform the record deletion.
- The `CrudRepositoryApi.DeleteBy` method accepts a map of criteria to delete by -- in this case, the primary key.
Transaction Support
err := sqldb.NewTransactionManager().
	WithTransaction(ctx, func(ctx context.Context) error {
		// do all your db processes in here (preferably prepared)
		// then return nil to commit, or an error to roll back:
		// return errors.New("some error") // to roll back
		return nil // to commit
	})
TypedRepository
TypedRepository is the recommended repository going forward and should cater to most repository needs.
If anything is missing or could be improved (especially if you feel you still need to use a repository other than TypedRepository), please reach out to the #go-msx team.
Insert
err = personsRepo.Insert(ctx, person1)
Upsert
err = personsRepo.Upsert(ctx, person1)
Update
err = personsRepo.Update(ctx, goqu.Ex(map[string]interface{}{"id": person1.Id}), person1)
CountAll
err = personsRepo.CountAll(ctx, &count, nil)
FindOne
err = personsRepo.FindOne(ctx, &destPerson, sqldb.And(map[string]interface{}{"id": person1.Id}).Expression())
FindAll
pagingResponse, err := personsRepo.FindAll(ctx, &destPersons,
	sqldb.Where(goqu.Ex(map[string]interface{}{"name": person1.Name})),
	sqldb.Keys(goqu.Ex(map[string]interface{}{"id": person1.Id})),
	sqldb.Distinct("name"),
	sqldb.Sort([]paging.SortOrder{{Property: "name", Direction: "ASC"}}),
	sqldb.Paging(paging.Request{Size: 10, Page: 0}),
)
DeleteOne
err = personsRepo.DeleteOne(ctx, goqu.Ex(map[string]interface{}{"id": person1.Id}))
DeleteAll
err = personsRepo.DeleteAll(ctx, goqu.Ex(map[string]interface{}{"name": person1.Name}))
Truncate
err = personsRepo.Truncate(ctx)
Complete Code Examples
First, create the persons table so that the code samples below will work:
CREATE TABLE persons ( id UUID PRIMARY KEY, name STRING );
type Person struct {
	Id   uuid.UUID `db:"id"`
	Name string    `db:"name"`
}

personsRepo, err := sqldb.NewTypedRepository[Person](ctx, "persons")
if err != nil {
	logger.WithContext(ctx).Error(err)
}

person1 := Person{Id: uuid.MustParse("437f96b0-6722-11ed-9022-0242ac120009"), Name: "Jonee"}
err = personsRepo.Insert(ctx, person1)
if err != nil {
	logger.WithContext(ctx).Error(err)
}

person1.Name = "Jonee6"
err = personsRepo.Upsert(ctx, person1)
if err != nil {
	logger.WithContext(ctx).Error(err)
}

person1.Name = "Jonee7"
err = personsRepo.Update(ctx, goqu.Ex(map[string]interface{}{"id": person1.Id}), person1)
if err != nil {
	logger.WithContext(ctx).Error(err)
}

count := int64(0)
err = personsRepo.CountAll(ctx, &count, nil)
if err != nil {
	logger.WithContext(ctx).Error(err)
}
logger.WithContext(ctx).Info(count)

var destPerson Person
err = personsRepo.FindOne(ctx, &destPerson, sqldb.And(map[string]interface{}{"id": person1.Id}).Expression())
if err != nil {
	logger.WithContext(ctx).Error(err)
}
logger.WithContext(ctx).Info(destPerson)

var destPersons []Person
pagingResponse, err := personsRepo.FindAll(ctx, &destPersons,
	sqldb.Where(goqu.Ex(map[string]interface{}{"name": person1.Name})),
	sqldb.Keys(goqu.Ex(map[string]interface{}{"id": person1.Id})),
	sqldb.Distinct("name"),
	sqldb.Sort([]paging.SortOrder{{Property: "name", Direction: "ASC"}}),
	sqldb.Paging(paging.Request{Size: 10, Page: 0}),
)
if err != nil {
	logger.WithContext(ctx).Error(err)
}
logger.WithContext(ctx).Info(pagingResponse)
logger.WithContext(ctx).Info(destPersons)

err = personsRepo.DeleteOne(ctx, goqu.Ex(map[string]interface{}{"id": person1.Id}))
if err != nil {
	logger.WithContext(ctx).Error(err)
}

err = personsRepo.DeleteAll(ctx, goqu.Ex(map[string]interface{}{"name": person1.Name}))
if err != nil {
	logger.WithContext(ctx).Error(err)
}

err = personsRepo.Truncate(ctx)
if err != nil {
	logger.WithContext(ctx).Error(err)
}
GoquRepository
GoquRepository is for those who need more flexibility than TypedRepository can provide and would like to work at the goqu query-builder level.
If anything is missing or could be improved (especially if you feel you still need to use a repository other than TypedRepository), please reach out to the #go-msx team.
Insert
dsInsert := rgoqu.Insert("persons")
err = rgoqu.ExecuteInsert(ctx, dsInsert.Rows(person2))
Upsert
dsUpsert := rgoqu.Upsert("persons")
err = rgoqu.ExecuteUpsert(ctx, dsUpsert.Rows(person2))
Update
dsUpdate := rgoqu.Update("persons")
err = rgoqu.ExecuteUpdate(ctx, dsUpdate.Where(goqu.Ex(map[string]interface{}{"id": person2.Id})).Set(person2))
Get
dsGet := rgoqu.Get("persons")
err = rgoqu.ExecuteGet(ctx, dsGet.Where(goqu.Ex(map[string]interface{}{"id": person2.Id})), &destPerson2)
Select
dsSelect := rgoqu.Select("persons")
err = rgoqu.ExecuteSelect(ctx, dsSelect.Where(goqu.Ex(map[string]interface{}{"name": person2.Name})), &destPersons2)
Delete
dsDelete := rgoqu.Delete("persons")
err = rgoqu.ExecuteDelete(ctx, dsDelete.Where(goqu.Ex(map[string]interface{}{"id": person2.Id})))
Truncate
dsTruncate := rgoqu.Truncate("persons")
err = rgoqu.ExecuteTruncate(ctx, dsTruncate)
Complete Code Examples
First, create the persons table so that the code samples below will work:
CREATE TABLE persons ( id UUID PRIMARY KEY, name STRING );
type Person struct {
	Id   uuid.UUID `db:"id"`
	Name string    `db:"name"`
}

rgoqu, err := sqldb.NewGoquRepository(ctx)
if err != nil {
	logger.WithContext(ctx).Error(err)
}

person2 := Person{Id: uuid.MustParse("437f96b0-6722-11ed-9022-0242ac120005"), Name: "Jonee"}

dsInsert := rgoqu.Insert("persons")
err = rgoqu.ExecuteInsert(ctx, dsInsert.Rows(person2))
if err != nil {
	logger.WithContext(ctx).Error(err)
}

dsUpsert := rgoqu.Upsert("persons")
person2.Name = "Jonee6"
err = rgoqu.ExecuteUpsert(ctx, dsUpsert.Rows(person2))
if err != nil {
	logger.WithContext(ctx).Error(err)
}

dsUpdate := rgoqu.Update("persons")
person2.Name = "Jonee7"
err = rgoqu.ExecuteUpdate(ctx, dsUpdate.Where(goqu.Ex(map[string]interface{}{"id": person2.Id})).Set(person2))
if err != nil {
	logger.WithContext(ctx).Error(err)
}

var destPerson2 Person
dsGet := rgoqu.Get("persons")
err = rgoqu.ExecuteGet(ctx, dsGet.Where(goqu.Ex(map[string]interface{}{"id": person2.Id})), &destPerson2)
if err != nil {
	logger.WithContext(ctx).Error(err)
}
logger.WithContext(ctx).Info(destPerson2)

var destPersons2 []Person
dsSelect := rgoqu.Select("persons")
err = rgoqu.ExecuteSelect(ctx, dsSelect.Where(goqu.Ex(map[string]interface{}{"name": person2.Name})), &destPersons2)
if err != nil {
	logger.WithContext(ctx).Error(err)
}
logger.WithContext(ctx).Info(destPersons2)

dsDelete := rgoqu.Delete("persons")
err = rgoqu.ExecuteDelete(ctx, dsDelete.Where(goqu.Ex(map[string]interface{}{"id": person2.Id})))
if err != nil {
	logger.WithContext(ctx).Error(err)
}

dsTruncate := rgoqu.Truncate("persons")
err = rgoqu.ExecuteTruncate(ctx, dsTruncate)
if err != nil {
	logger.WithContext(ctx).Error(err)
}
SqlRepository
SqlRepository is offered for those who need to drop down to the raw SQL level. This approach can be harder to maintain, less flexible, and more error-prone, so it is discouraged where possible.
If anything is missing or could be improved (especially if you feel you still need to use a repository other than TypedRepository), please reach out to the #go-msx team.
SqlExecute
err = rsql.SqlExecute(ctx, "INSERT INTO persons VALUES ($1, $2)", []interface{}{uuid.MustParse("437f96b0-6722-11ed-9022-0242ac120002"), "Jonee"})
SqlSelect
var destPersons3 []Person
err = rsql.SqlSelect(ctx, "SELECT * FROM persons", nil, &destPersons3)
SqlGet
var destPerson3 Person
err = rsql.SqlGet(ctx, "SELECT * FROM persons WHERE id=$1", []interface{}{uuid.MustParse("437f96b0-6722-11ed-9022-0242ac120002")}, &destPerson3)
Complete Code Examples
First, create the persons table so that the code samples below will work:
CREATE TABLE persons ( id UUID PRIMARY KEY, name STRING );
type Person struct {
	Id   uuid.UUID `db:"id"`
	Name string    `db:"name"`
}

rsql, err := sqldb.NewSqlRepository(ctx)
if err != nil {
	logger.WithContext(ctx).Error(err)
}

err = rsql.SqlExecute(ctx, "INSERT INTO persons VALUES ($1, $2)", []interface{}{uuid.MustParse("437f96b0-6722-11ed-9022-0242ac120002"), "Jonee"})
if err != nil {
	logger.WithContext(ctx).Error(err)
}

var destPersons3 []Person
err = rsql.SqlSelect(ctx, "SELECT * FROM persons", nil, &destPersons3)
if err != nil {
	logger.WithContext(ctx).Error(err)
}
logger.WithContext(ctx).Info(destPersons3)

var destPerson3 Person
err = rsql.SqlGet(ctx, "SELECT * FROM persons WHERE id=$1", []interface{}{uuid.MustParse("437f96b0-6722-11ed-9022-0242ac120002")}, &destPerson3)
if err != nil {
	logger.WithContext(ctx).Error(err)
}
logger.WithContext(ctx).Info(destPerson3)
OpenApi Client
MSX enables generating OpenApi clients with ease.
Client Generation
The following snippets show an example of how to generate an OpenApi client for Manage Microservice V8 APIs:
- `cmd/build/build.yml`:

  ```yaml
  # Integration Generation
  generate:
    - path: internal/integration/manage
      openapi:
        spec: ../.openapi/manage-service-8.yaml
        config: ../.openapi/manage-service-8-config.json
  ```

- `internal/integration/.openapi/manage-service-8.yaml`: Place the openapi contract in this file.

- `internal/integration/.openapi/manage-service-8-config.json`:

  ```json
  {
    "generateInterfaces": true,
    "structPrefix": false,
    "packageName": "manage",
    "enablePostProcessFile": true
  }
  ```

- `internal/integration/manage/.openapi-generator-ignore`:

  ```
  .gitignore
  go.mod
  go.sum
  .openapi-generator-ignore
  .travis.yml
  api/**
  docs/**
  git_push.sh
  ```
After the above pieces are in place, you can execute the generate build step:
make generate
Contract Validation
To ensure the upstream contract remains compatible with your local version:
- Add the following snippet to build.yml:

      # Contract Management
      openapi:
        # Remote (consumer) API contract pairs
        contracts:
          - consumer: internal/integration/.openapi/manage-service-8.yaml
            producer: https://cto-github.cisco.com/raw/NFV-BU/msx-platform-specs/develop/manage-service-8.yaml
        # Sources for well-known schemas
        alias:
          - from: https://api.swaggerhub.com/domains/Cisco-Systems46/msx-common-domain/8
            to: https://cto-github.cisco.com/raw/NFV-BU/msx-platform-specs/sdk1.0.10/common-domain-8.yaml

  Any internal GitHub links will use the GitHub Personal Access Token from your environment (GITHUB_TOKEN) when retrieving the file. Ensure you have an up-to-date PAT configured.

- Add a check to your build/ci/checks.yml:

      checks:
        - name: OpenApi
          commands:
            - make: openapi-compare
          analyzers:
            - builtin: generate-openapi-report

  This will ensure each commit to your repo checks for backwards-incompatible changes to the contract.
MSX Stream Operations
A Streaming Operation library compatible with AsyncApi 2.x documentation.
Terminology
Message : A discrete unit of communication between a publisher and a set of subscribers. Must include data (payload) and may include metadata (headers).
Channel : A source or destination for message delivery between publishers and subscribers.
Subscriber : A receiver of a sequence of messages from a channel.
Publisher : A sender of a sequence of messages to a channel.
AsyncApi : Documentation standard for describing event-based and streaming message transports such as Kafka, Redis Streams, Amazon SQS. Describes messages, channels, publishers, subscribers, servers, security, and other related concerns. Comparable to OpenApi, which describes REST message transports.
DTO : Data Transfer Object. Used for serialization and deserialization of externally sourced or directed structured values.
Port : Description of the interface between the Stream Operations subsystem and your message publisher or subscriber. May include headers and filters, and must include a payload DTO.
Components
The following table compares the pattern of components across AsyncApi, Stream Operations, and HTTP components.
| AsyncApi | Stream Publisher | Stream Subscriber | HTTP | Purpose |
|---|---|---|---|---|
| Channel | Channel | Channel | Controller | Domain ingress and egress |
| Operation | ChannelPublisher | ChannelSubscriber | Router | Dispatch to endpoints |
| Message | MessagePublisher | MessageSubscriber | Endpoint | Event processing |
| Header, Payload | Output Port | Input Port | Request/Response | Exchanged data |
Stream Ports
Stream Ports are structures used to describe the parts of incoming and outgoing Stream Messages. For an introduction to Ports, please see Ports.
Input Ports
Input Ports specify fields to be extracted from an incoming stream message or HTTP request.
Stream Input Port struct tags must use the in prefix.
Each field with an in struct tag will be automatically populated before being
passed to your Message Subscriber. The full syntax of the in struct tag is as
follows:
in:"<fieldGroup>[=<peerName>]"
The in struct tag contains the following subcomponents:
<fieldGroup>
: (Required) The name of the message part from which the value will be extracted.
Valid field groups for streaming operations are:
- header - Message metadata/headers (string-keyed map of strings).
- body - Message payload (JSON request body). Max one per port struct.
- messageId - Unique id of the message (typically a random uuid).
[=<peerName>]
: (Optional)
A peer is the index within the field group of the data for each port field in the original message.
Currently, only header fieldGroup has indexed content (individual header values).
When not specified, the default peer in the metadata is the lowerCamelCase inflection of the field name:
e.g. the EventType struct field points to the eventType header.
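As a sketch, an input port combining these field groups might look like the following. The DTO, field, and header names here are illustrative only, not taken from go-msx; the small reflect helper just shows how the `in` tags read back:

```go
package main

import (
	"fmt"
	"reflect"
)

// Hypothetical payload DTO for this example (not a go-msx type).
type DriftCheckRequest struct {
	ServiceId string `json:"serviceId"`
}

// Illustrative input port for a DriftCheck subscriber.
type driftCheckInput struct {
	EventType string            `in:"header"`              // default peer: the "eventType" header
	TraceId   string            `in:"header=X-B3-TraceId"` // explicit peer name
	MessageId string            `in:"messageId"`           // unique message id
	Request   DriftCheckRequest `in:"body"`                // at most one body field per port
}

// inTag returns the in struct tag for a named field.
func inTag(field string) string {
	f, _ := reflect.TypeOf(driftCheckInput{}).FieldByName(field)
	return f.Tag.Get("in")
}

func main() {
	fmt.Println(inTag("EventType")) // header
	fmt.Println(inTag("TraceId"))   // header=X-B3-TraceId
}
```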
Output Ports
Output ports specify parts of the published message to be populated from the port struct.
Stream Output Port struct tags must use the out prefix.
Each field with an out struct tag will be automatically applied to the message
before the message is published. The full syntax of the out struct tag is as
follows:
out:"<fieldGroup>[=<peerName>]"
The subcomponents of the out struct tag are the same as in struct tag, above.
Data Transfer Objects (DTOs)
Fields in a port specifying the body component will typically have a DTO struct
as their underlying type (eg. api.DriftCheckRequest above).
By default, these are serialized using the Content-Type of the stream
(currently defaults to application/json).
Publishers
Stream Operations Publishers are used to publish messages on streams. They consist of a number of components:
| Component | AsyncApi Documentor | Documentation Model |
|---|---|---|
| Your Message Publisher (service) | - | - |
| Your Output Port (struct) | asyncapi.MessagePublisherDocumentor | jsonschema.Schema |
| Your Payload DTO (struct) | asyncapi.MessagePublisherDocumentor | jsonschema.Schema |
| streamops.MessagePublisher | asyncapi.MessagePublisherDocumentor | asyncapi.Message |
| streamops.ChannelPublisher | asyncapi.ChannelPublisherDocumentor | asyncapi.Operation |
| streamops.Channel | asyncapi.ChannelDocumentor | asyncapi.Channel |
Components
Channel
The channel component represents the stream itself (SQS or Kafka topic, Redis stream, Go channel, SQLDB table, etc). It is implemented as a singleton that should be created after configuration but before start-up.
Channel Publisher
The channel publisher component represents the set of publishable messages for a given stream. It is implemented as a service that should be created as a dependency of your message publisher.
Message Publisher
The message publisher component represents one of the publishable messages for a given stream.
It is implemented as a service created after configuration but before start-up.
Notice that it has a defined API interface for mocking, and should be mocked by dependent services
during testing.
Output Port
The message port contains a mapping of fields to be set on the outgoing message. Each field will be mapped to a header or body field based on the struct tags.
Message Payload DTO
The payload DTO contains the body of the message to be published. Before dispatch to the underlying stream, the message will be validated using the JSON-schema annotations on your DTO.
Generation
It is strongly advised to auto-generate these components and customize them afterwards. See Channels and AsyncApi for details about generation.
Subscribers
Stream Operations Subscribers are used to consume messages from streams. They consist of a number of components:
| Component | AsyncApi Documentor | Documentation Model |
|---|---|---|
| Your Message Subscriber (service) | - | - |
| Your Input Port (struct) | asyncapi.MessageSubscriberDocumentor | jsonschema.Schema |
| Your Payload DTO (struct) | asyncapi.MessageSubscriberDocumentor | jsonschema.Schema |
| streamops.MessageSubscriber | asyncapi.MessageSubscriberDocumentor | asyncapi.Message |
| streamops.ChannelSubscriber | asyncapi.ChannelSubscriberDocumentor | asyncapi.Operation |
| streamops.Channel | asyncapi.ChannelDocumentor | asyncapi.Channel |
Components
Channel
The channel component represents the stream itself (SQS or Kafka topic, Redis stream, Go channel, SQLDB table, etc). It is implemented as a singleton that should be created after configuration but before start-up.
Channel Subscriber
The channel subscriber component represents the set of subscribable messages for a given stream. It is implemented as a service, and should have one of your application services as a dependency.
Message Subscriber
The message subscriber component represents one of the subscribable messages for a given stream.
It is implemented as a service created after configuration but before start-up.
Notice that it has a defined API interface for mocking, and should be mocked by dependent services
during testing.
Input Port
The message port contains a mapping of fields to be set from the incoming message. Each field will be mapped from a header or body field based on the struct tags.
Payload DTO
The payload DTO contains the parsed body of the subscribed message.
Before dispatch to your subscriber, the message will be validated using
the JSON-schema annotations and any Validatable interface implementation
on your DTO.
Generation
It is strongly advised to auto-generate these components and customize them afterwards. See Channels and AsyncApi for details about generation.
AsyncApi Schema
AsyncApi Schema contains support infrastructure for generating and consuming AsyncApi specifications directly from go code.
Microservice developers will typically interact with a small subset of AsyncApi models.
Documentation Generators
Documentation Generators convert an active object from the streamops package
to a documentation element from this package.
- ChannelDocumentor - Generates ChannelItem documentation from a streamops.Channel
- ChannelPublisherDocumentor - Generates Operation documentation from a streamops.ChannelPublisher
- ChannelSubscriberDocumentor - Generates Operation documentation from a streamops.ChannelSubscriber
- MessagePublisherDocumentor - Generates Message documentation from a streamops.MessagePublisher
Documentation Generators have the following user-modifiable properties:
- Skip: Set to true if you wish to skip generating documentation for this node.
- ChannelItem/Operation/Message: Specify a documentation element instance to use as the basis for generating documentation for this node. This is typically where you provide explanatory fields such as title, description, etc.
- Mutator: Specify a function to enable customization of the generated documentation after the Documentor has processed this node.
Documentation Elements
Documentation Elements are generated by documentors and are inserted into the AsyncApi specification document being generated.
- Channel - Documents the sequence of messages sent and received from a single location.
- Message - Documents a single unit of communication sent to or received from a channel.
- Operation - Documents a single channel operation (publish or subscribe) for a single channel.
- Schema - Documents the format of a message payload or headers. In go-msx, this schema must be in JSON Schema format.
MSX LRU Cache
An LRU cache implementation which expires key/value pairs based on a TTL duration. Inspired by rcache.
- Entries are added with a key, a value, and an individual TTL (time to live).
- New uses should call NewCache2.
- The cache will expire entries after the TTL has passed.
- The cache checks every ExpireFrequency for expired entries and expires them in batches of at most ExpireLimit at once.
- The cache has no size limit. It will grow until the process runs out of memory, unless entries are expired.
- The cache is safe for concurrent access.
- Setting DeAgeOnAccess to true causes the cache to reset the TTL of an entry when it is accessed or updated, in true LRU fashion. When false (the default, for backwards compatibility) it behaves like a simple TTL cache. New uses should probably set this to true.
- When the metrics setting is true (default false), the cache will emit metrics.
- The timeSource setting is used to provide a clock for testing purposes.
Usage
Instantiation
To create a new cache with 120 second retention, which expires up to 100 keys every 15 seconds with de-aging switched on, metrics on, with prefix "cat_", and a normal time source:
myCache := lru.NewCache2(120 * time.Second, 100, 15 * time.Second, true,
clock.New(), true, "cat_")
lru provides an interface type Cache and a concrete type HeapMapCache; NewCache2 returns an instance of HeapMapCache which implements the former.
Storage
To store a key/value pair:
myCache.Set("somekey", "myvalue")
Retrieval
To retrieve a key/value pair:
value, exists := myCache.Get("somekey")
if !exists {
// fill cache for "somekey"
}
Metrics
When initialized with metrics set true, the cache will emit metric events to the stats package thus:
- entries: the number of entries in the cache
- hits: the number of cache hits
- misses: the number of cache misses
- sets: the number of times Set or SetWithTTL were called
- evictions: the number of times an entry was evicted
- gcRuns: the number of times the garbage collector was run
- gcSizes: a histogram of the number of entries evicted in each garbage collection run
- deAgedAt: a histogram of the remaining time to live of entries when they are de-aged
The metricsPrefix setting is used to prefix the metrics names in the output system.
MSX Certificate
The certificate module provides access to static and dynamic TLS/x509 certificate sources, including the following providers:
- File - load certs and keys from the filesystem
- Vault - generate and renew certs and keys from Vault
- Cache - save upstream certs and keys to disk
The certificate module also provides a common configuration parser for TLS configuration.
Sources
A source identifies the provider and provider parameters required to obtain identity and authority certificates.
Each source is defined in the configuration using a unique name (lowercase alphanumeric only).
Source properties are configured under certificate.source.<sourcename>, for example:
certificate.source:
identity:
provider: ...
property1: ...
property2: ...
property3: ...
Providers
Each source specifies a provider and its parameters. Individual providers are detailed in the following sections.
File
To specify the local filesystem as the source for certificates, use the File provider:
certificate.source:
identity:
provider: file
ca-cert-file: /etc/pki/tls/certs/ca-identity.crt
cert-file: /etc/pki/tls/certs/spokeservice.pem
key-file: /etc/pki/tls/private/spokeservice-key.pem
When a subsystem requests certificates from the identity source, it will:
- Load certificates from the filesystem for each TLS connection
While active renewal is not supported, the file provider does read in changes to the file for each connection. The cert/key files may be rotated as convenient.
Configuration Properties
| Key | Default | Required | Description |
|---|---|---|---|
ca-cert-file | - | Required | CA Certificate File, PEM format |
cert-file | - | Required | Identity Certificate File, PEM format |
key-file | - | Required | Identity Private Key File, PEM format |
Vault
To specify Vault as the source for certificates, use the Vault provider:
certificate.source:
identity:
provider: vault
path: pki/vms
role: "${spring.application.name}"
cn: "${spring.application.name}"
alt-names:
- "${network.hostname}"
- "${spring.application.name}.svc.kubernetes.cluster.local"
- "${spring.application.name}.service.consul"
ip-sans:
- "${network.outbound.address}"
When a subsystem requests certificates from the identity source, it will:
- Generate an identity certificate and private key
- Renew the identity certificate half-way through its lifetime.
Configuration Properties
| Key | Default | Required | Description |
|---|---|---|---|
path | - | Required | Vault PKI mount point |
role | - | Required | Vault PKI issuer role name |
cn | - | Required | Desired identity certificate CN |
alt-names | - | Optional | Desired identity certificate subject alternative names |
ip-sans | - | Optional | Desired identity certificate IP subject alternative names |
Note: alt-names and ip-sans will be stripped of empty entries so may include
undefined variables with empty defaults:
- ${some.undefined.variable:}
Cache
To configure a cache for a remote certificate source, use the Cache provider:
certificate.source:
identity:
provider: vault
path: pki/vms
role: "${spring.application.name}"
cn: "${spring.application.name}"
alt-names:
- "${remote.service.hostname}"
ip-sans:
- "${kubernetes.pod.ip}"
- "${remote.service.ip}"
identitycache:
provider: cache
upstream-source: identity
key-file: "/certs/${spring.application.name}-key.pem"
cert-file: "/certs/${spring.application.name}.pem"
ca-cert-file: "/etc/ssl/certs/ca-identity.crt"
When a subsystem requests certificates from the identitycache source, it will:
- Generate and store an identity certificate and private key under /certs
- Retrieve and store the authority certificate under /etc/ssl/certs
- Renew the identity certificate half-way through its lifetime
Configuration Properties
| Key | Default | Required | Description |
|---|---|---|---|
ca-cert-file | - | Required | CA Certificate File, PEM format |
cert-file | - | Required | Identity Certificate File, PEM format |
key-file | - | Required | Identity Private Key File, PEM format |
TLS Configuration
TLS connection configuration is used in many places in go-msx, including:
- Kafka client
- Web server
For ease of use, these configurations have been unified into a single format.
Configuration Properties
| Key | Default | Required | Description |
|---|---|---|---|
enabled | false | Optional | TLS should be enabled for this client/server |
insecure-skip-verify | false | Optional | Skip verification of the server's certificate chain and host name |
min-version | tls12 | Optional | Minimum TLS version to support. One of: tls10, tls11, tls12, tls13 |
certificate-source | - | Optional | Server or Client certificate source. Required for server. |
cipher-suites | 1 | Optional | Cipher suites to enable. |
server-name | - | Optional | Server name to check in certificate when connecting from client. |
1 Current default cipher suites are:
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
- TLS_RSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_256_CBC_SHA
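As an illustration, a unified TLS block for a server might look like the following. Note the `server.tls` configuration root here is an assumption for the example — use the configuration prefix documented for the specific client or server you are configuring:

```yaml
# Illustrative only: "server.tls" is an assumed configuration root.
server.tls:
  enabled: true
  min-version: tls12
  certificate-source: identity   # required for servers
  cipher-suites:
    - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
```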
MSX Resource Module
MSX Resource manages locating and accessing files from the source, staging, and runtime filesystems in a consistent fashion.
Filesystem
To correctly use resources, it is first important to understand the resource filesystem, and how it is used to locate files during development and inside containers.
The resource filesystem contains one or more of the following layers, if found:
- production - rooted in the Docker image at /var/lib/${app.name}
- staging - rooted at dist/root/var/lib/${app.name} underneath the source root
- source - rooted at the folder containing the repository's go.mod
The resource filesystem will attempt to locate each of these folders and if found, will search it for your resource references.
Resource References
The primary data type of the MSX resource module is the resource reference. It represents the resource file subpath. All resource paths use the forward-slash (/) as the path component separator.
Two types of paths can be used:
- relative - No leading forward-slash (data/my-resource.json): File path is relative to the code file consuming the reference.
- absolute - Leading forward-slash (/internal/migrate/resource.json): File path is relative to the resource filesystem root.
Obtaining a Single Resource Reference
To work with a resource you must first create a reference to it using the resource.Reference function:
func processMyResource(ctx context.Context) error {
myResourceRef := resource.Reference("data/my-resource.json")
}
This returns a resource.Ref object pointing to the specified path.
Obtaining Multiple Resource References
To retrieve multiple resource references using a glob pattern you can call the resource.References function:
func processMyResources(ctx context.Context) error {
myResourceRefs := resource.References("data/*.json")
}
This returns a []resource.Ref slice with an entry for each matching resource.
Consuming Resources
Once you have obtained one or more resource references, you can access their contents using one of its methods.
JSON
To read in the contents of the resource as JSON and unmarshal it to an object, use the Unmarshal() method:
var myResourceContents MyResourceStruct
err := resource.Reference("data/my-resource.json").Unmarshal(&myResourceContents)
Bytes
To read in the contents of the resource as a []byte, use the ReadAll() method:
data, err := resource.Reference("data/my-resource.json").ReadAll()
http.File
To open the file and return an http.File, use the Open() method:
file, err := resource.Reference("data/my-resource.json").Open()
Note that http.File also satisfies the io.ReadCloser interface, and can therefore be used anywhere an io.ReadCloser is accepted.
Retry
Retry enables attempting an operation multiple times, stopping on success (no error returned) or permanent operation failure.
Retrying
To retry an action, create a new instance of Retry via NewRetry and
call its Retry method:
tooEarly, _ := time.Parse(time.RFC3339, "2020-01-01T00:00:00Z")
tooLate, _ := time.Parse(time.RFC3339, "2020-12-31T23:59:59.999999999Z")
// Retry once an hour, up to 10 attempts
r := NewRetry(ctx, RetryConfig{Attempts: 10, Delay: 60 * 60 * 1000})
err := r.Retry(func() error {
now := time.Now()
if now.Before(tooEarly) {
return retry.TransientError{
Cause: errors.New("Will succeed in the future"),
}
} else if now.After(tooLate) {
return retry.PermanentError{
Cause: errors.New("Will never succeed again"),
}
}
return nil
})
The above retries its action once per hour, with up to 10 attempts.
If the time is before tooEarly, it will continue retrying, since it
returns a TransientError. If the time is after tooLate, it will
stop retrying, since it returns PermanentError. If the time is after tooEarly
but before tooLate, it will succeed and cease further attempts.
Retry distinguishes between Transient and Permanent errors by inspecting
the returned error instance. If it implements the failure interface,
it can be queried for transience/permanence:
type failure interface {
IsPermanent() bool
}
Permanent errors should return true from IsPermanent(), transient
errors should return false. As above, this can be handled by wrapping
the error in either PermanentError or TransientError.
Configuration Loading
RetryConfig is designed to be loaded from configuration, making it possible
to configure from static, environmental, or remote configuration
sources in a consistent fashion.
const configRootMerakiClientRetry = "meraki.client.retry"
var retryConfig retry.Config
if err := config.FromContext(ctx).Populate(&retryConfig, configRootMerakiClientRetry); err != nil {
return err
}
Configuration Examples
- Retries without delays

      r := NewRetry(ctx, RetryConfig{
          Attempts: 2,
          Delay:    0,
          BackOff:  0.0,
          Linear:   true,
      })

- Retries with fixed delays (1 second)

      r := NewRetry(ctx, RetryConfig{
          Attempts: 2,
          Delay:    1000,
          BackOff:  1.0,
          Linear:   true,
      })

- Retries with linear delays (1, 2, 3, 4 seconds)

      r := NewRetry(ctx, RetryConfig{
          Attempts: 5,
          Delay:    1000,
          BackOff:  1.0,
          Linear:   true,
      })

- Retries with exponential delays (1, 2, 4, 8 seconds)

      r := NewRetry(ctx, RetryConfig{
          Attempts: 5,
          Delay:    1000,
          BackOff:  2.0,
          Linear:   false,
      })

- Retries with linear delay and jitter (low randomness) (1, 2.452, 3.571, 4.357)

      r := NewRetry(ctx, RetryConfig{
          Attempts: 5,
          Delay:    1000,
          BackOff:  1.0,
          Linear:   true,
          Jitter:   1000,
      })

- Retries with linear delay and jitter (extreme randomness) (1, 7.8, 20.3, 8.45). With a higher Jitter value you can expect greater randomness.

      r := NewRetry(ctx, RetryConfig{
          Attempts: 5,
          Delay:    1000,
          BackOff:  1.0,
          Linear:   true,
          Jitter:   20000,
      })

- Retries with exponential delay and jitter (1, 2, 4, 8). Note: the jitter here is negligible, so this behaves like exponential backoff with no jitter.

      r := NewRetry(ctx, RetryConfig{
          Attempts: 5,
          Delay:    1000,
          BackOff:  2.0,
          Linear:   false,
          Jitter:   1,
      })

- Using retry with decorators

      types.NewOperation(func(ctx context.Context) error {
          return errors.New("a transient error")
      }).
          WithDecorator(Decorator(RetryConfig{
              Attempts: 1,
              Delay:    10,
              BackOff:  2.0,
              Linear:   false,
              Jitter:   1,
          })).
          Run(ctx)
Sanitize
MSX Sanitize allows request data to be pre-processed (before validation) to ensure potentially dangerous content is removed. For example XSS and arbitrary HTML can be removed from plain-text strings. MSX Sanitize also auto-sanitizes log messages.
Sanitizing Input
To explicitly sanitize a tree of data, including maps, slices, structs in-place:
if err := sanitize.Input(&mydata, sanitize.NewOptions("xss")); err != nil {
return err
}
After returning, mydata will be sanitized based on the supplied Options.
Options
Options are available for each of the sanitizers from
github.com/kennygrant/sanitize
including:
- Accents (accents)
- BaseName (basename)
- Xss (xss)
- Name (name)
- Path (path)
Custom sanitizers provided by MSX Sanitize include:
- Secret (secret)
Struct Tags
To specify these options on a struct field, use the san:"..." tag, for example:
type MyRequest struct {
Name string `json:"name" san:"xss"`
Description string `json:"description" san:"xss"`
Ignored string `json:"ignored" san:"-"`
}
In this struct, Name and Description fields indicate they must be sanitized for XSS/HTML content (xss),
and Ignored should not be sanitized at all (-).
NOTE: If a struct field does not have the san tag, it will inherit from its ancestors, up to the options passed
into the sanitize.Input call.
Sanitizing Logs
Logs are auto-sanitized using some base rules. These can be augmented by the microservice using the
sanitize.secrets configuration:
sanitize.secrets:
keys:
- status
custom:
enabled: true
patterns:
- from: "\\[userviceconfiguration/\\w+\\]"
to: "[userviceconfiguration/...]"
- from: "\\[secret/\\w+\\]"
to: "[secret/...]"
Within sanitize.secrets you can configure the following options:
| Key | Default | Required | Description |
|---|---|---|---|
enabled | true | Optional | Enable secret replacement |
keys | - | Optional | A set of XML/JSON/ToString attributes and objects to flag as sensitive |
custom.* | - | Optional | Custom go regex replacement. Does not use keys. |
json.* | - | Optional | JSON replacement. Replaces once per entry in keys. |
xml.* | - | Optional | XML replacement. Replaces once per entry in keys. |
to-string.* | - | Optional | Stringer replacement. Replaces once per entry in keys. |
For custom, specify a list of regexes and replacements in custom.patterns, as above.
| Key | Default | Required | Description |
|---|---|---|---|
custom.patterns[*].from | - | Required | Regex to match |
custom.patterns[*].to | - | Required | Replacement (including variables) |
For json, xml, and to-string, specify a list of regexes to match, including the named capture groups
prefix and postfix:
| Key | Default | Required | Description |
|---|---|---|---|
.enabled | true | Optional | Enable this set of patterns (json, xml, to-string) |
.patterns[*].from | - | Required | Regex to match |
.patterns[*].to | ${prefix}*****${postfix} | Optional | Replacement (including regex variables) |
MSX Scheduled Module
MSX Scheduled manages the periodic execution of tasks within microservices.
Tasks
The work to be performed on a periodic basis must be wrapped in an Action (a function whose signature matches types.ActionFunc):
func doWork(ctx context.Context) error {
	// TODO: perform the desired work.
	return nil
}
Actions can be anonymous functions, struct methods (as above), or static methods, and can also be derived from Operations (types.Operation).
Scheduling
Scheduling a task requires two steps: Configuration and Registration.
Configuration
To configure the periodic execution, your task will need a simple name to identify its configuration. For example, the do-work task can be configured as:
scheduled.tasks:
do-work:
fixed-interval: 10m
# fixed-delay: 5m
# initial-delay: 15m
# cron-expression: "0 0 0 * *"
This example configuration will execute the do-work task (once registered) every 10 minutes.
To ensure a fixed period between executions, use the fixed-delay configuration instead.
To specify an initial delay before first execution that is different from fixed-delay or fixed-interval, specify the initial-delay.
To use a CRON expression to specify the execution schedule, use the cron-expression configuration. For an overview of CRON expressions, see here.
Registration
To register your task at runtime, call the scheduled.ScheduleTask function during the application Start:
const taskNameDoWork = "do-work"
func init() {
app.OnRootEvent(app.EventStart, app.PhaseAfter, func(ctx context.Context) error {
return scheduled.ScheduleTask(ctx, taskNameDoWork, doWork)
})
}
This will load the configuration using the supplied task name, and schedule the task according to the configuration.
MSX Transit Module
MSX transit is an implementation of transit encryption. It allows swappable encryption implementations via the Provider interface.
Usage
The primary mode of consumption for MSX Transit is within business Models. To add transit encryption support to your model, add an anonymous transit.WithSecureData member to your model structure:
type Organization struct {
transit.WithSecureData
OrganizationId string `db:"organization_id"`
TenantId gocql.UUID `db:"tenant_id"`
}
This embedded struct will store its data in a field named secure_data, so a migration will need to add such a field if the table already exists:
ALTER TABLE organization ADD COLUMN secure_data TEXT;
To store and retrieve individual encrypted fields from your model, add accessors:
const secureDataMerakiApiKey = "merakiApiKey"
func (o *Organization) MerakiApiKey(ctx context.Context) (string, error) {
return o.SecureValue(ctx, secureDataMerakiApiKey)
}
func (o *Organization) SetMerakiApiKey(ctx context.Context, value *string) error {
return o.SetSecureValue(ctx, o.TenantId.Bytes(), secureDataMerakiApiKey, value)
}
You can then use these accessors in your converter and services to retrieve and store the values from your model.
Per-Application Encryption
Sometimes you will want values to be encrypted, but not on a per-tenant basis. In this case, define your key id within your domain package, and use it in place of the TenantId in your property setters:
var appKeyId = types.MustParseUUID("3e246fc7-12d8-4626-a739-1fd22bbf47f0")
func (o *Organization) SetMerakiApiKey(ctx context.Context, value *string) error {
return o.SetSecureValue(ctx, appKeyId.Bytes(), secureDataMerakiApiKey, value)
}
Introduction
Skel is a tool for generating MSX service skeletons and components. It is a part of the go-msx library and tools, and the skeleton projects it generates are compatible with the go-msx framework.
Installation
You may install Skel either by cloning the git repo and using golang's install command, or by copying the 'skel' binary from the repo's 'bin' directory; the latter is recommended since you will need to update it from time to time.
In either case, you will need to ensure that Git is set up and can communicate with the cto-github.cisco.com server. See the go-msx README for details.
Install from Artifactory
-
Download the skel tarball:
MacOS:

      curl -L -o - https://engci-maven-master.cisco.com/artifactory/symphony-group/com/cisco/vms/go-msx-skel/latest/go-msx-skel-darwin-latest.tar.gz | tar -xzf -

  Linux:

      curl -L -o - https://engci-maven-master.cisco.com/artifactory/symphony-group/com/cisco/vms/go-msx-skel/latest/go-msx-skel-linux-latest.tar.gz | tar -xzf -
Move the skel binary to a location in your path:
mv skel ~/go/bin
Install via Go
Prerequisite: Go 1.18+
-
Ensure your GOBIN environment variable is correctly set and referenced in your PATH. For example:
export GOPATH=~/go
      export PATH=$PATH:$GOPATH/bin

  Recall that GOBIN defaults to $GOPATH/bin, or $HOME/go/bin if the GOPATH environment variable is not set.
Be sure to set your Go proxy settings correctly. For example:
go env -w GOPRIVATE=cto-github.cisco.com/NFV-BU -
Check-out go-msx into your local workspace:
mkdir -p $HOME/msx && cd $HOME/msx git clone git@cto-github.cisco.com:NFV-BU/go-msx.git cd go-msx go mod download -
Install skel:

      make install-skel
Usage
Skel may be run using either command-line sub-commands or by using its minimal, but hopefully helpful, interactive mode.
-
To start the interactive project generator, run the skel command with no arguments:
skel -
To list the targets and options for the skel command, add the -h flag:

      skel -h
To get help for a particular target:
skel <target> -h
In addition to the numerous generation targets, there are the following utility targets:
- `help`: display the help text
- `version`: display the current, and most recent, `skel` build versions
- `completion`: generate the BASH completion script for `skel`
Generic Microservice
- Contains: {wazinnit}
- Root dir: `./{serviceName}/`
- Command: `generate-app`
- Menu: Generate Archetype | Generic Microservice
A generic MSX app skeleton that contains various bony bits ...
Probes (Beats)
Probes enable the collection of operational data from remote systems on a per-device basis.
- To generate an operational data collector:

  ```
  skel
  ? Generate archetype: Beat
  ? Project Parent Directory: /Users/mcrawfo2/msx/demos
  ? Version: 5.0.0
  ? Protocol: arp
  ? Build notifications slack channel: go-msx-build
  ? Primary branch name: main
  ```
For more details about creating and implementing probes, please see the Writing a New Beat tutorial.
Channels
- To interactively generate a channel publisher or subscriber, for one or more messages:

  ```bash
  skel generate-channel
  ```

- To generate a channel supporting a single message publisher:

  ```bash
  skel generate-channel-publisher "COMPLIANCE_EVENT_TOPIC"
  ```

- To generate a channel supporting multiple message publishers, or add another message publisher to an existing multi-message publisher channel:

  ```bash
  skel generate-channel-publisher "COMPLIANCE_EVENT_TOPIC" --message "DriftCheck"
  ```

- To generate a channel supporting a single message subscriber:

  ```bash
  skel generate-channel-subscriber "COMPLIANCE_EVENT_TOPIC"
  ```

- To generate a channel supporting multiple message subscribers, or add another message subscriber to an existing multi-message subscriber channel:

  ```bash
  skel generate-channel-subscriber "COMPLIANCE_EVENT_TOPIC" --message "DriftCheck"
  ```
Files
From the above examples, the following files may be generated:
- `pkg.go`
  - Package-wide logger
  - Context Key type definition
  - Channel for `COMPLIANCE_EVENT_TOPIC`
  - Channel documentation (`asyncapi.Channel`)
- `publisher_channel.go`
  - Channel publisher for the package channel
  - Channel publisher documentation (`asyncapi.Operation`)
- `subscriber_channel.go`
  - Channel subscriber for the package channel
  - Channel subscriber documentation (`asyncapi.Operation`)
- `publisher_*.go`
  - Message publisher for individual outgoing messages
  - Message publisher documentation (`asyncapi.Message`)
- `subscriber_*.go`
  - Message subscriber for individual incoming messages
  - Message subscriber documentation (`asyncapi.Message`)
- `api/*.go`
  - DTOs for published messages (eg `DriftCheckRequest`)
AsyncAPI
- To interactively generate a channel publisher or subscriber, for one or more channels from an existing AsyncApi specification via url or local path:

  ```bash
  skel generate-channel-asyncapi
  ```

- To generate a consumer for channels from an existing AsyncApi specification via url:

  ```bash
  export ASYNCAPI_SPEC_URL="https://cto-github.cisco.com/raw/NFV-BU/merakigoservice/develop/api/asyncapi.yaml?token=..."
  skel generate-channel-asyncapi "$ASYNCAPI_SPEC_URL" COMPLIANCE_EVENT_TOPIC
  ```

- To generate a consumer for channels from a local specification:

  ```bash
  skel generate-channel-asyncapi "api/asyncapi.yaml" COMPLIANCE_EVENT_TOPIC
  ```
Skel Skaffold Integration
Generate Project Skaffold Support
A skaffold.yaml file will be created in the root of any app or service pack project you create using skel; no extra action is required.
In addition, deployments/kubernetes/minivms/${app.name}-deployment.yaml and deployments/kubernetes/msxlite/${app.name}-deployment.yaml will be created.
If your project has already been generated, you can use the skel generate-kubernetes command inside the project folder to add the skaffold support files.
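For orientation, the generated skaffold.yaml ties the service image to the per-environment deployment manifests through profiles. The sketch below is illustrative only — the service name (`someservice`), the schema version, and the exact layout are assumptions, not the generator's verbatim output:

```yaml
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: someservice   # hypothetical service name
profiles:
  - name: minivms
    deploy:
      kubectl:
        manifests:
          - deployments/kubernetes/minivms/someservice-deployment.yaml
  - name: msxlite
    deploy:
      kubectl:
        manifests:
          - deployments/kubernetes/msxlite/someservice-deployment.yaml
```

Selecting a profile (for example with `SKAFFOLD_PROFILE=msxlite`) picks which deployment manifest skaffold applies.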
Configure Kubernetes
1. Connect to your msx-lite instance (the kubernetes host, not the installer), and retrieve your kubernetes configuration:

   ```bash
   kubectl config view --raw --minify
   ```

2. Apply this configuration as either the default kubernetes configuration, or as a custom configuration referred to by the `KUBECONFIG` environment variable:

   ```bash
   mkdir -p $HOME/.kube
   cat > $HOME/.kube/config <<EOF
   <config contents from instance go here>
   EOF
   ```

3. Update the server URL in the kubeconfig file you just saved to refer to the lab IP address:

   ```yaml
   # server: https://127.0.0.1:6443
   server: https://10.81.85.174:6443
   ```

4. If using a non-default config file, ensure `KUBECONFIG` is set in your bash profile to point to the new file:

   ```bash
   export KUBECONFIG=$HOME/.kube/rtp-4-msx-lite-35/config
   ```
Setup Skaffold Support in GoLand
To set up skaffold in GoLand:
1. Install Skaffold: Follow the instructions at https://skaffold.dev/docs/install/ to install Skaffold 2.x or higher on your system.

2. Install the Skaffold plugin for GoLand: In GoLand, go to `Goland | Settings | Plugins...`, search for "Cloud Code", and click the Install button.

3. You may need to restart GoLand.
Create an MSX-Lite Run Configuration
1. When you open your generated project in GoLand you should now see a popup saying "Kubernetes with Cloud Code. Skaffold configuration detected", since there will be a skaffold.yaml in the root.

2. Add a run config via the `Add Configuration` link therein, via the light blue `Add Configuration` button top right, or via `Run | Edit Configurations | +`.

3. Select the config type: "Cloud Code: Kubernetes".

4. Give the configuration a name.

5. On the run tab, under `Environment Variables`, specify:

   ```
   SKAFFOLD_PROFILE=msxlite
   ```

   This tells skaffold to use the msxlite deployment found in the msxlite subdir.

6. Give the path to the `skaffold.yaml` file on the "Build | Deploy" tab (it should default correctly).

7. You probably want "All Modules and Dependencies" selected.

8. Now you can run that config to deploy using skaffold.
Makefile Usage
go-msx uses GNU Make to present abstract build targets for developers and Continuous Integration systems. This allows for consistent builds across a variety of environments, and development of Continuous Integration without a hosted job runner.
make may be run directly to execute targets.
- To list the targets in the Makefile, execute the `help` target:

  ```bash
  make help
  ```

  Sample output is shown below.

- To pass flags to the `go` command when executing `build`:

  ```bash
  export BUILDER_FLAGS='-exec xprog'
  make vet
  ```

- To pass flags to the `build` command when executing `build`:

  ```bash
  export BUILD_FLAGS='--artifactory.password="cisco123"'
  make publish
  ```
In addition to the numerous build targets (below), there are the following utility targets:
- `help`: display the help text
Targets
```
assemblies               Generate supplemental artifacts
clean                    Remove any temporary build outputs
debug                    Build a debug executable
deploy-github-repo       Configure a standard github repository
deploy-jenkins-job       Upload a standard Jenkins build job to MSX Jenkins
deployment               Generate the installer deployment variables
deps                     Install dependencies
dist                     Build all outputs required for a container image
docker                   Generate a docker image for this service
docker-debug             Generate a debugging docker image for this service
docker-publish           Publish a docker image for this service
generate                 Execute code generation
help                     Show this help
manifest                 Generate the installer manifest
openapi-compare          Compare the openapi contracts for the microservice
openapi-generate         Store the openapi contract for the microservice
package                  Generate an SLM package
package-deploy           Deploy this service using SLM to an MSX instance
package-publish          Publish this service as an SLM package to S3
precommit                Ensure the code is ready for committing to version control
publish                  Publish all artifacts required for the installer
tag                      Tag the repository with a new PATCH version number
test                     Execute unit tests
update-go-msx            Update the go-msx library dependency to the latest version
update-go-msx-build      Update the go-msx-build library dependency to the latest version
update-go-msx-populator  Update the go-msx-populator library dependency to the latest version
verify-contract          Ensure the openapi contract matches the generated code
vet                      Use go vet to validate sources
```
Build Usage
build may be run directly using command-line targets.
- To list the targets and options for the `build` command, add the `-h` flag:

  ```bash
  go run cmd/build/build.go --config cmd/build/build.yml -h
  ```

- To get help for a particular target:

  ```bash
  go run cmd/build/build.go --config cmd/build/build.yml <target> -h
  ```

- To pass a custom build configuration, use the `--config` option:

  ```bash
  go run cmd/build/build.go --config cmd/build/build-custom.yml <target>
  ```
In addition to the numerous build targets (below), there are the following utility targets:
- `help`: display the help text
- `version`: display the current, and most recent, `build` versions
- `completion`: generate the shell completion script for `build`
Targets
```
Available Commands:
  build-assemblies               Builds Assemblies
  build-debug-executable         Build the binary debug executable
  build-executable               Build the binary executable
  build-installer-manifest       Generate the installer manifests
  build-package                  Build the service deployment package
  build-tool                     Build the binary tool
  compare-openapi-spec           Compares the current openapi spec with the stored version
  completion                     Generate the autocompletion script for the specified shell
  deploy-github-repo             Deploy Github repository
  deploy-jenkins-job             Deploy Jenkins job
  deploy-package                 Deploy the service to an MSX instance
  docker-build                   Build the target release docker image
  docker-build-debug             Build the target debug docker image
  docker-push                    Push the target docker image to the upstream repository
  docker-save                    Save the target docker image to the specified file
  download-generate-deps         Download generate dependencies
  download-seccomp-dependencies  Download seccomp dependencies
  download-test-deps             Download test dependencies
  execute-unit-tests             Execute unit tests
  generate                       Generate code
  generate-build-info            Create a build metadata file
  generate-deployment-variables  Stage variables file with build version
  generate-openapi-spec          Stores the current openapi spec into a file
  generate-seccomp-profile       Create a seccomp profile
  git-tag                        Tag the current commit
  go-fmt                         Format all go source files
  go-vet                         Vet all go source files
  help                           Help about any command
  install-asyncapi-ui            Installs AsyncAPI/Studio package
  install-dependency-configs     Download dependency config files to distribution config directory
  install-entrypoint             Copy custom entrypoint to distribution root directory
  install-executable-configs     Copy configured files to distribution config directory
  install-extra-configs         Copy custom files to distribution config directory
  install-resources              Installs Resources
  install-swagger-ui             Installs Swagger-UI package
  license                        License all go source files
  publish-binaries               Publishes Binaries
  publish-installer-manifest     Deploy the installer manifests
  publish-package                Publish the service deployment package
  publish-tool                   Publish the binary tool
```
Build Configuration
go-msx Build uses YAML build configuration files to define the build to be executed. A build configuration describes the build metadata for one of:
- Microservice
- Command Line tool
- Service Pack
- Library
A build configuration file can include these artifacts:
- Binary artifacts
- Assemblies (tarballs)
- Resources
- Runtime Configuration Files
- Docker Images
Configuration Sources
Like all go-msx applications, go-msx-build can retrieve configuration from a variety of sources:
- Environment
- Command-Line Options
- Build Configuration Files
- Application Configuration Files
- Defaults
To specify the primary build configuration file, pass the --config option to build:
```bash
go run cmd/build/build.go --config cmd/build/build.yml
```
This will normally be handled by the Makefile.
Configuration passed in by either Environment Variables or Command-Line Options will override values also specified in Files or Defaults.
Environment Variables
Some settings are intended to be injected from environment variables. These include:
- `docker.username` (DOCKER_USERNAME)
- `docker.password` (DOCKER_PASSWORD)
- `artifactory.username` (ARTIFACTORY_USERNAME)
- `artifactory.password` (ARTIFACTORY_PASSWORD)
- `build.number` (BUILD_NUMBER)
- `manifest.folder` (MANIFEST_FOLDER)
- `jenkins.username` (JENKINS_USERNAME)
- `jenkins.password` (JENKINS_PASSWORD)
- `github.token` (GITHUB_TOKEN)
It is considered unsafe or inflexible to store them directly in the configuration file. The default generated Jenkinsfile will automatically inject these environment variables as required by the relevant steps.
Application Configuration
Some settings below are intended to be read from the application configuration files.
These include:
- `info.app.*` - `bootstrap.yml`
- `server.*` - `bootstrap.yml`
To ensure these are being read from the correct source, ensure the executable.config-files list
contains the base application configuration files (eg bootstrap.yml).
Example:
```yaml
executable:
  configFiles:
    - bootstrap.yml
```
Configuration Sections
executable
The executable configuration specifies the entrypoint and primary configuration file(s) of this build.
| Key | Default | Required | Description |
|---|---|---|---|
| `executable.cmd` | app | Optional | The cmd sub-folder containing the application main module. |
| `executable.config-files` | - | Required | A list of configuration files within the main module to include in the build. |
Example:
```yaml
executable:
  configFiles:
    - bootstrap.yml
    - dnaservice.production.yml
```
msx
The msx configuration specifies details of the MSX release to interface with.
| Key | Default | Required | Description |
|---|---|---|---|
| `msx.release` | - | Required | The MSX release of this microservice (output). |
| `msx.deployment-group` | - | Required | The deployment group of the build. |
| `msx.platform.parent-artifacts` | - | Required | Maven artifact roots to scan for default properties. |
| `msx.platform.version` | - | Required | The platform version to use for locating maven artifacts. Accepts EDGE and STABLE builds. |
| `msx.platform.include-groups` | - | Required | Maven artifact groupIds to include in artifact scanning. |
| `msx.platform.swagger-artifact` | com.cisco.nfv:nfv-swagger | Optional | MSX artifact groupId and artifactId for nfv-swagger. |
| `msx.platform.swagger-webjar` | org.webjars:swagger-ui:3.23.11 | Optional | Maven artifact triple for swagger web jar. |
Example:
```yaml
msx:
  release: 3.10.0
  deploymentGroup: dna
  platform:
    parentArtifacts:
      - com.cisco.vms:vms-service-starter
      - com.cisco.vms:vms-service-starter-core
      - com.cisco.vms:vms-service-starter-kafka
      - com.cisco.nfv:nfv-integration-consul-leader
    version: 3.10.0-EDGE
    includeGroups: "com.cisco.**"
```
docker
The docker configuration controls interactions with the docker daemon, global repository,
images, and Dockerfile scripts.
| Key | Default | Required | Description |
|---|---|---|---|
| `docker.dockerfile` | docker/Dockerfile | Optional | The Dockerfile used for this build. |
| `docker.baseimage` | msx-base-buster:3.9.0-70 | Optional | The base image within the repository. |
| `docker.repository` | dockerhub.cisco.com/vms-platform-dev-docker | Optional | The repository source and destination. |
| `docker.username` | - | Optional | User name to authenticate to repository. |
| `docker.password` | - | Optional | Password to authenticate to repository. |
| `docker.buildkit` | - | Optional | true to use docker buildkit when building the docker image. |
| `docker.base.dynamic.enabled` | true | Optional | true to use manifests to dynamically locate the base docker image. |
| `docker.base.dynamic.stream` | EI-Stable | Optional | Manifest stream to search within for manifests. |
| `docker.base.dynamic.version` | ${msx.release} | Optional | MSX release to search within for manifests. |
| `docker.base.dynamic.manifest` | msxbase-bullseye-manifest | Optional | MSX manifest to search within for builds. |
| `docker.base.dynamic.image` | msx-base-image | Optional | Manifest key identifying the image to use. |
Example:
```yaml
docker:
  dockerfile: build/package/Dockerfile
```
kubernetes
The kubernetes configuration provides defaults for generating kubernetes manifests.
| Key | Default | Required | Description |
|---|---|---|---|
| `kubernetes.group` | platformms | Optional | The kubernetes group used for pods in production. |
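No example is given above for this section; a minimal sketch, simply restating the default from the table:

```yaml
kubernetes:
  group: platformms
```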
manifest
The manifest configuration specifies how to build and publish installer manifests.
| Key | Default | Required | Description |
|---|---|---|---|
| `manifest.folder` | Build-Stable | Optional | The maven output folder to publish the manifest to. |
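A minimal sketch, restating the default folder from the table:

```yaml
manifest:
  folder: Build-Stable
```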
resources
The resources section identifies the files to be included as part of the docker image.
Each entry has the following properties:
| Key | Default | Required | Description |
|---|---|---|---|
| `resources.includes` | - | Optional | List of globs of files to include. Processed first. |
| `resources.excludes` | - | Optional | List of globs of files to exclude. Processed second. |
Example:
```yaml
resources:
  includes:
    - "/internal/migrate/**/*.sql"
    - "/internal/populate/**/*"
  excludes:
    - "/internal/populate/**/*.go"
```
assemblies
The assemblies configuration specifies .tar file generation. The .tar files will be included in generated
manifests and published (unless disabled).
| Key | Default | Required | Description |
|---|---|---|---|
| `assemblies.root` | platform-common | Optional | The folder from which assemblies are created by default. All sub-folders with a 'templates' folder or 'manifest.json' are included. |
| `assemblies.custom` | - | Optional | List of custom assemblies to include. See below. |
Example:
```yaml
assemblies:
  root: platform-common
```
assemblies.custom
The assemblies.custom setting contains a list of custom assemblies to generate. These
will be uploaded to artifactory and recorded as binaries in the manifest, unless disabled
with artifactory.assemblies.
Each entry in this list has the following properties:
| Key | Default | Required | Description |
|---|---|---|---|
| `path` | - | Required | The root path of the assembly files. |
| `path-prefix` | - | Optional | A folder inside the assembly to prefix the files during the build. |
| `manifest-prefix` | - | Required | The prefix of the file name in the manifest. |
| `manifest-key` | - | Required | The location of the entry in the JSON manifest. |
| `includes` | /**/* | Optional | Glob of files to include. Processed first. |
| `excludes` | - | Optional | Glob of files to exclude. Processed second. |
Example:
To create an assembly file called "skyfallui-files-${release}-${build}.tar":
```yaml
assemblies:
  custom:
    - path: ui/build
      pathPrefix: services
      manifestPrefix: skyfallui-files
      manifestKey: ${msx.deploymentGroup}-ui
```
- Each file from the `ui/build` subtree will be prefixed with the `services` folder in the output tar, e.g. `ui/build/dna/index.js` will be relocated to `services/dna/index.js`.
- The assembly will be added to the generated artifact manifests at e.g. `dna-ui`.
artifactory
The artifactory configuration specifies artifactory connectivity, folders, binaries, and images.
| Key | Default | Required | Description |
|---|---|---|---|
| `artifactory.assemblies` | true | Optional | Include assemblies in publishing and manifests. |
| `artifactory.installer` | deployments/kubernetes | Optional | The folder in which installer binaries can be found, eg pod, rc, meta templates. |
| `artifactory.repository` | https://.../vms-3.0-binaries | Optional | The base url for storing published artifacts. |
| `artifactory.installer-folder` | binaries/vms-3.0-binaries | Optional | The folder prefix of binaries to record in the manifest. |
| `artifactory.username` | - | Optional | The user name with which to authenticate to Artifactory. |
| `artifactory.password` | - | Optional | The password with which to authenticate to Artifactory. |
| `artifactory.custom` | - | Optional | List of custom binaries to include. See below. |
| `artifactory.images` | - | Optional | List of docker images to include. |
Example:
```yaml
artifactory:
  installer: deployments/production
  images:
    - nacservice
```
artifactory.custom
The artifactory.custom setting contains a list of custom binaries to include. These
will be uploaded to artifactory and recorded in the manifest.
Each entry in this list has the following properties:
| Key | Required | Description |
|---|---|---|
| `path` | Required | The source path of the file to include. |
| `output-name` | Required | The destination name of the file. |
| `manifest-prefix` | Required | The prefix of the file name in the manifest. |
| `manifest-key` | Required | The location of the entry in the JSON manifest. |
Example:
```yaml
artifactory:
  custom:
    - path: deploymentvariables/nac_deployment_variables.yml
      outputName: nac_deployment_variables.yml
      manifestPrefix: deployment-variables
      manifestKey: deployment_variables
    - path: deploymentvariables/nac_variables.yml
      outputName: nac_variables.yml
      manifestPrefix: variables
      manifestKey: variables
```
generate
The generate configuration specifies paths to be generated by the generate target.
Mostly these will refer to folders on which to run go generate to produce mocks.
Specialized generators such as openapi are described in following sections.
NOTE: Embedding using vfs has been deprecated in favor of go:embed which executes
during the go build process.
| Key | Default | Required | Description |
|---|---|---|---|
| `generate[*].path` | - | Required | Folder to generate outputs. |
| `generate[*].command` | go generate | Optional | Command to execute. |
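Since each entry takes a `path` and an optional `command`, a generate list might look like the following sketch (the folder paths and the custom command are hypothetical):

```yaml
generate:
  # Uses the default command (go generate) in this folder
  - path: internal/mocks
  # Runs a custom command instead of go generate
  - path: api
    command: go run generate.go
```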
generate[*].openapi
The openapi generator can be used to auto-generate OpenApi clients from contract specifications.
| Key | Default | Required | Description |
|---|---|---|---|
| `generate[*].openapi.spec` | - | Required | Consumer contract location. |
| `generate[*].openapi.config` | - | Required | OpenApi client generator config file. |
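A sketch of a generate entry using the openapi generator; the output path and config file name are hypothetical, and the spec path mirrors the consumer contract example shown later under the openapi section:

```yaml
generate:
  - path: pkg/clients/manage
    openapi:
      spec: internal/stream/.openapi/manage-service-8.yaml
      config: api/clientgen-config.yaml
```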
go
The go configuration specifies environment variables and options to be passed to Go tools during the build.
| Key | Description |
|---|---|
| `go.env.all.*` | Environment variables for all platforms |
| `go.env.linux.*` | Environment variables for linux platform |
| `go.env.darwin.*` | Environment variables for darwin (MacOS) platform |
| `go.vet.options[*]` | List of command line options to pass to go vet |
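A sketch combining these keys; the values shown are illustrative, not defaults:

```yaml
go:
  env:
    all:
      GOFLAGS: -mod=vendor
    linux:
      CGO_ENABLED: "0"
  vet:
    options:
      - -composites=false
```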
build
The build configuration specifies information about the build used to generate buildinfo.yml.
| Key | Default | Required | Description |
|---|---|---|---|
| `build.number` | SNAPSHOT | Required | The build number of this build. |
| `build.group` | com.cisco.msx | Optional | The build group. |
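A minimal sketch; in CI the number is normally injected via the BUILD_NUMBER environment variable, as noted under Environment Variables:

```yaml
build:
  number: SNAPSHOT
  group: com.cisco.msx
```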
info.app
The info.app configuration specifies details about the application used across all parts of the build.
| Key | Default | Required | Description |
|---|---|---|---|
| `info.app.name` | - | Required | The name of the application being built. |
| `info.app.attributes.display-name` | - | Required | The display name of the application being built. |
Example:
```yaml
info.app:
  name: dnaservice
  attributes:
    displayName: DNA Microservice
```
server
The server configuration specifies details about the web server used across all parts of the build.
| Key | Default | Required | Description |
|---|---|---|---|
| `server.port` | - | Required | The web server port of the application being built. |
Example:
```yaml
server:
  port: 9393
```
jenkins
The jenkins configuration specifies details about the Jenkins CI server used by the project.
| Key | Default | Required | Description |
|---|---|---|---|
| `jenkins.job` | - | Optional | The simplified job path to the Jenkins Job on the server. |
| `jenkins.server` | https://jenkins.infra.ciscomsx.com | Optional | The base url of the Jenkins CI server. |
| `jenkins.username` | - | Optional | User name to authenticate to Jenkins. |
| `jenkins.password` | - | Optional | API Token to authenticate to Jenkins. Can be created on the User Configure page in Jenkins UI. |
Example:
```yaml
jenkins.job: eng-sp-umbrella/builds/umbrellaservice
```
github
The github configuration specifies details about the GitHub Source Control server used by the project.
| Key | Default | Required | Description |
|---|---|---|---|
| `github.repository` | '${spring.application.name}' | Optional | The name of the repository on the server. |
| `github.organization` | 'NFV-BU' | Optional | The owner of the repository on the server. |
| `github.server` | https://cto-github.cisco.com | Optional | The base url of the GitHub server. |
| `github.token` | - | Optional | API Token to authenticate to GitHub. Can be created on the User Settings > Developer Settings > Personal Access Tokens page in the GitHub UI. |
| `github.hook.push` | ${jenkins.server}/github-webhook/ | Optional | Github Push Webhook to configure on the repository. |
| `github.hook.pull-request` | ${jenkins.server}/ghprbhook/ | Optional | Github PR Webhook to configure on the repository. |
| `github.teams.jenkins` | Jenkins-generic-users | Optional | GitHub CI Team to assign write access to the repository. |
| `github.teams.eng` | - | Optional | GitHub Engineering Team to assign write access to the repository. |
Example:
```yaml
github.organization: xiaoydu
```
aws
The aws configuration specifies credentials and target details for AWS.
| Key | Default | Required | Description |
|---|---|---|---|
| `aws.access-key-id` | ${aws.access.key.id} | Optional | Access Key Id to authenticate to AWS. |
| `aws.secret-access-key` | ${aws.secret.access.key} | Optional | Secret Access Key to authenticate to AWS. |
These values default to the standard environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) so
no extra configuration should be required if using them.
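A sketch spelling out the defaults explicitly; this is normally unnecessary when the standard AWS environment variables are set:

```yaml
aws:
  accessKeyId: ${aws.access.key.id}
  secretAccessKey: ${aws.secret.access.key}
```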
deploy
The deploy configuration specifies the target for package deployment.
| Key | Default | Required | Description |
|---|---|---|---|
| `deploy.host` | - | Required | SSH config host name to target for deployment. Must point to an installer container. |
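A minimal sketch, assuming `msx-installer` is a Host entry you have defined in your `~/.ssh/config`:

```yaml
deploy:
  host: msx-installer
```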
openapi
The openapi configuration specifies producer and consumer contract locations (local and upstream),
along with schema resolution aliases.
| Key | Default | Required | Description |
|---|---|---|---|
| openapi.spec | api/openapi.yaml | Optional | The project-root relative location of the producer contract specification. |
| openapi.contracts[*].consumer | - | Required | Local copy of the consumer contract for client generation. |
| openapi.contracts[*].producer | - | Required | Upstream copy of the producer contract for client generation. |
| openapi.alias[*].from | - | Required | Canonical schema url to be loaded from an alternative source |
| openapi.alias[*].to | - | Required | Alternative source location for schema |
Example:
```yaml
# Contract Management
openapi:
  # Local (producer) API contract
  spec: api/openapi.yaml
  # Remote (consumer) API contract pairs
  contracts:
    - consumer: internal/stream/.openapi/manage-service-8.yaml
      producer: https://cto-github.cisco.com/raw/NFV-BU/msx-platform-specs/develop/manage-service-8.yaml
  # Alternative sources for well-known schema
  alias:
    - from: https://api.swaggerhub.com/domains/Cisco-Systems46/msx-common-domain/8
      to: https://cto-github.cisco.com/raw/NFV-BU/msx-platform-specs/sdk1.0.10/common-domain-8.yaml
```
Build Targets
go-msx Build has a collection of default build targets encompassing standard build steps that can be reused in a project's Makefile. The following chapters describe each of the standard build targets.
Users can also define custom build targets for project-specific needs.
Custom Build Targets
Build targets can be added using the `build.AddTarget` function. This registers a CLI handler function for a new build target. Build configuration can be accessed via the `pkg.BuildConfiguration` global variable from the handler function. Ensure the module containing the custom build target is initialized at startup by either:

- Defining your build target in the build `main` package of your project; or
- Importing the module containing your custom build target from the build `main` package of your project.
Example:
```go
package main

import build "cto-github.cisco.com/NFV-BU/go-msx-build/pkg"

var myCustomTargetFlag bool

func init() {
	cmd := build.AddTarget("custom-target", "A custom build target", customTarget)
	cmd.Flags().BoolVarP(&myCustomTargetFlag, "enabled", "e", false, "Custom target option")
}

func customTarget(args []string) error {
	// custom target steps go here
	if myCustomTargetFlag {
		// ...
	}
	return nil
}
```
Project Maintenance Build Targets
deploy-github-repo
The deploy-github-repo target will create and/or reconfigure a GitHub repository with the appropriate users
and webhooks.
Target server and repository are configured via the `github.*` build settings.
deploy-jenkins-job
The deploy-jenkins-job target will upload your jenkins job as defined in the build/ci folder from config.xml.
This file will normally be auto-generated.
Target server and job are configured via the `jenkins.*` build settings.
update-go-msx
The update-go-msx target will attempt to update your go-msx library to the latest version.
update-go-msx-build
The update-go-msx-build target will attempt to update your go-msx-build library to the latest version.
update-go-msx-populator
The update-go-msx-populator target will attempt to update your go-msx-populator library to the latest version.
Development Targets
download-generate-deps
The download-generate-deps target installs cross-project generation dependencies, including:
- github.com/rust-lang/mdBook
- github.com/badboy/mdbook-mermaid
- github.com/vektra/mockery/v2
generate
The generate target will execute any custom (or default) generate commands defined in
the generate.* entries.
If no command is specified for an entry, it will default to running go generate on that folder.
Generate commands can also be specified using go:generate comments.
Generation will be executed when generate executes on the directory containing files with these comments.
go-fmt
The go-fmt target executes go fmt on directories which contain *.go files (excluding the vendor directory).
license
The license target verifies that all go source code files contain the appropriate Cisco license header.
update-openapi-producer-spec
The update-openapi-producer-spec target will obtain the latest version of the
microservice producer OpenApi contract specification and overwrite the stored version.
The producer specification file is configured via the `openapi.spec` build setting.
update-openapi-consumer-spec
The update-openapi-consumer-spec target will obtain the latest version of the
consumer OpenApi contract specification and overwrite the stored version.
The consumer local and remote specifications are configured via the `openapi.contracts[*]` build setting.
Artifacts Build Targets
build-assemblies
The build-assemblies target collects folders into tarballs and places their output into the staging
assembly folder (dist/assembly).
For each entry in assembly.custom.*, the target will create a tar file named ${prefix}-${release}-${build}.tar.
build-debug-executable
The build-debug-executable target compiles the main module of the current application and outputs it
to the staging executable folder (dist/root/usr/bin). Unlike build-executable, however, it outputs a
binary suitable for debugging. This can be included in a container to remotely debug the application.
build-executable
The build-executable target compiles the main module of the current application and outputs it to
the staging executable folder (dist/root/usr/bin).
Flags passed to go build can be customized using the go.env.*.GOFLAGS configuration.
build-installer-manifest
The build-installer-manifest target will create install manifests for integration servers and production
installation. The contents of the manifests will be dynamically generated from the artifactory and assemblies
configuration, along with the docker image generated from the current build configuration.
To deploy the manifest artifact use the publish-installer-manifest target.
build-package
The build-package target will generate a Service package to be uploaded and deployed by the service pack deployer.
It will include the standard service contents, including assemblies, manifests, images, deployment variables, and
other binaries.
To deploy the package artifact use the publish-package target.
build-tool
The build-tool target will compile and generate a Tool binary distribution (.tar.gz) to be uploaded and deployed
to Artifactory (or elsewhere). It will include the binary and any resources defined in the tool configuration
section.
NOTE: These binaries are statically compiled and therefore must not be distributed.
docker-build
The docker-build target will create a docker image for the current build configuration. The contents of the
image are staged using make dist inside a build container, and then deployed onto an MSX base image to create
the runtime container image.
The base image can be specified using the docker.repository and docker.base-image configuration settings.
The docker image will be named in the format ${docker.repository}/${info.app.name}:${release}-${build}.
docker-build-debug
The docker-build-debug target will create a debugging docker image for the current build configuration.
The contents of the image are staged using make dist inside a build container, and then deployed onto an
MSX base image to create the runtime container image.
The base image can be specified using the docker.repository and docker.base-image configuration settings.
The docker image will be named in the format ${docker.repository}/${info.app.name}:${release}-${build}.
download-generate-deps
The download-generate-deps target installs cross-project generation dependencies, including:
- github.com/vektra/mockery
- bou.ke/staticfiles
download-seccomp-dependencies
The download-seccomp-dependencies target installs the seccomp-profiler for generating seccomp profiles.
See generate-seccomp-profile, below.
generate-build-info
The generate-build-info target creates the build-specific metadata file buildinfo.yml, including version information,
and build timestamps.
The metadata file is generated directly into the staging configuration folder (dist/root/etc/${app.name}).
This file will be parsed on MSX Application startup during the configuration phase, and used to register the
service metadata with Consul.
Default values for the info.build fields should be specified in the application bootstrap.yml file to enable
local development before generating the metadata file.
generate-deployment-variables
The generate-deployment-variables target creates a YAML ansible variables file compatible with the MSX installer.
This file will be published during publish.
generate-seccomp-profile
The generate-seccomp-profile target creates the configuration file seccomp.yml, listing the expected set of linux
syscalls to be allowed during execution. This prevents a compromised executable from making unauthorized syscalls.
install-asyncapi-ui
The install-asyncapi-ui target downloads the AsyncApi Studio package and extracts the
relevant files to the staging web folder (dist/root/var/lib/${app.name}/www).
install-dependency-configs
The install-dependency-configs target scans maven artifacts for default-*.properties files and copies them
into the staging configuration folder (dist/root/etc/${app.name}). At runtime, a go-msx microservice will read these
files ensuring MSX microservices across frameworks have the same default configuration.
install-executable-configs
The install-executable-configs target copies configuration files from the main module of the application being
built to the staging configuration folder (dist/root/etc/${app.name}).
The list of configuration files to be copied is specified in the build configuration under executable.configFiles:

```yaml
executable:
  configFiles:
    - bootstrap.yml
    - dnaservice.production.yml
```
install-extra-configs
install-resources
The install-resources target copies static files from the project tree to the staging resources folder
(dist/var/lib/${app.name}).
The list of resources to be copied is specified in the build configuration at resources.*.
install-swagger-ui
The install-swagger-ui target downloads the Swagger UI webjar and MSX Swagger artifacts and extracts the
relevant files to the staging web folder (dist/root/var/lib/${app.name}/www).
Verification Build Targets
compare-openapi-consumer-spec
The compare-openapi-consumer-spec target will obtain the latest version of a
consumer OpenApi contract specification, as identified in the build configuration.
After obtaining the latest contract, it will compare it with the stored version,
and generate a report of the differences.
Consumer local and remote specifications are configured via the openapi.contracts[*] build setting.
compare-openapi-producer-spec
The compare-openapi-producer-spec target will obtain the latest version of the
microservice producer OpenApi contract specification and compare it with the stored
version. After comparison, it will generate a report of the differences.
Producer specification file is configured via the openapi.spec build setting.
compare-openapi-specs
The compare-openapi-specs target will execute the compare-openapi-producer-spec
target, and then compare-openapi-consumer-spec target for each registered contract.
A summary report will be generated.
download-test-deps
The download-test-deps target installs cross-project test dependencies, including:
- github.com/axw/gocov/gocov
- github.com/AlekSi/gocov-xml
- github.com/stretchr/testify/assert
- github.com/stretchr/testify/mock
- github.com/stretchr/testify/http
- github.com/pmezard/go-difflib/difflib
- github.com/jstemmer/go-junit-report
execute-unit-tests
The execute-unit-tests target searches for testable directories (those containing *_test.go files),
and invokes their unit tests while collecting line coverage data. It then generates coverage reports
from the coverage data.
| Format | Output File |
|---|---|
| HTML | test/gocover.html |
| JUnit | test/junit-report.xml |
| Cobertura | test/cobertura-coverage.xml |
go-vet
The go-vet target executes go vet on directories which contain *.go files (excluding the vendor directory).
Options to pass to go vet can be specified in the build configuration under go.vet.options.
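For example, a hypothetical build configuration fragment supplying go vet options might look like this. The option shown is a standard go vet flag, but the surrounding schema is an assumption:

```yaml
# Hypothetical build.yml fragment: options passed to go vet.
go:
  vet:
    options:
      - -composites=false   # example: relax unkeyed composite literal checks
```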
Publishing Build Targets
deploy-package
The deploy-package target will upload your already-built package tar-ball to a specified installer container.
The installer container ssh "host" must be properly configured in your ~/.ssh/config file, for example:
```
Host installer-tme-dmz-01
    HostName rtp-dmz-bbhost.lab.ciscomsx.com
    User root
    Port 23556
    IdentityFile ~/.ssh/installer-tme-dmz.key
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
```
In this example, the installer container is named "installer-tme-dmz-01". This name should be passed
using the deploy.host configuration, for example:
```
go run build/cmd/build.go --config build/cmd/build.yml --deploy.host installer-tme-dmz-01 deploy-package
```

or

```
DEPLOY_HOST="installer-tme-dmz-01" make package-deploy
```
docker-push
The docker-push target will publish the local docker image generated using docker-build to the docker
repository specified in the current build configuration.
The repository can be specified using the docker.repository configuration setting.
docker-save
The docker-save target will output the local docker image generated using docker-build to a tar file
named ${info.app.name}.tar in the current directory. The tarred image will include the original repository and
image tag.
git-tag
The git-tag target re-creates and overwrites any local and remote tags for the current version ${release}-${build}.
This is commonly used after publish to tag the source repo with the build.
publish-binaries
The publish-binaries target will deploy any assemblies and other installer binaries to artifactory.
The remote repository folder is specified through artifactory.repository. Within the repository folder,
artifacts will be placed underneath ${msx.deploymentGroup}/${release}-${build}/ to isolate files from
each build and deployment group.
Binaries are specified in the artifactory configuration. Assembly publishing can be disabled by
setting artifactory.assemblies to false.
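A hypothetical build configuration fragment for the settings above might look like this; the repository value is a placeholder and the exact schema is an assumption:

```yaml
# Hypothetical build.yml fragment: publishing settings for publish-binaries.
artifactory:
  repository: my-repo-folder   # placeholder remote repository folder
  assemblies: false            # disable assembly publishing
```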
publish-installer-manifest
The publish-installer-manifest target executes Maven to deploy the manifest for the current build configuration.
publish-package
The publish-package target will use your local S3 client to upload the service package to S3. The correct S3 folder
will automatically be calculated. Your S3 client (aws s3 ...) should be properly configured with credentials either
using environment variables or configuration files.
publish-tool
The publish-tool target will upload the tool distribution packages (built with build-tool) to Artifactory.
Both versioned and latest artifacts will be published for easy URL distribution.
Checks
See the internal Checks documentation.
Contributing
1. Ensure you create a meaningfully named topic branch for your code:
   - `feature/sql-transactions`
   - `bugfix/populate-error-handling`
2. Make your code changes
3. Run `make precommit` to regenerate, reformat, license, etc.
4. Commit your code to your topic branch
5. Rebase your topic branch onto master (do not reverse-merge master into your branch)
6. Ensure your commits are cohesive, or just squash them
7. Create a Pull Request with a meaningful title similar to your topic branch name
Skel Execution Sequence
Program flow
- `cmd/skel/skel.go` is skel's main function. It calls the run method from the skel package (in `skel/`), passing in a build number, which, in turn, is passed in during build via `ldflags` from `build-tool`'s `BuildConfig` (`build-tool` is in `build/tool.go`)
- the skel package init loads skel templates into a map from the embedded filesystem
- it also sets up pre-run project and generation configuration loading routines that will be called before each command is run
- config is loaded into a package-level variable called `skeletonConfig`
- if skel does not find a pre-existing project in the current dir (by looking for `.skel.json`) it diverts to the interactive menus
- the menus are navigated in `skel/configure.go`, which simply fills in the values into `skeletonConfig`, which is declared therein
- skel then routes on via the `cli/` package in order to enable cobra command processing (github.com/spf13/cobra)
- (aside: the actual skel generation targets are defined and registered in `skel/skeleton.go`, which comprises the heart of skel)
- each target/command is called using its string key after the machinery in skeleton.go adds common pre and post generator keys to the list of those to be run
- the sandwich filling between the pre and post slices of generators is derived from the archetype (see `skel/archetype.go`)
- in addition to pre and post generators, there are some that must be run explicitly by using cli commands (see groupings)
- once the complete list of generators to be run has been assembled, they are executed in order
- each target/command/generator will typically:
  a. call out to OS level routines using the `gopkg.in/pipe.v2` lib
  b. fill out substitutions and apply templates using the routines in `render.go`, which provides many options
- fin
In general, skel simply creates new files, overwriting any that already exist; however, most of the targets do not overlap and may be freely run in any order, or, in the case of Domain and AsyncAPI, may be run multiple times to build up additional variants.
Completion Scripts Oddity
The completion script target is an oddity in that it sends its output to stdout rather than writing files into the project tree.
Domain generation
AsyncAPI generation
Generator groupings
(sp = service pack)
Pre generators
generate-skel-json Create the skel configuration file
generate-build Create makefile, build.go, build.yml
generate-app Readme, go.mod, main.go, 2 yml
generate-test internal/empty_test.go
Archetype specific
generate-migrate (beat,sp) Create the migrate package
generate-domain-beats (beat) Generate beats domain implementation
generate-service-pack (sp) Generate service pack implementation
generate-kubernetes (all) Create production kubernetes manifest templates
Post
generate-deployment-variables deployment_variables.yml
add-go-msx-dependency Adds go modules appropriate to the archetype
generate-local local & remote address, consul/vault
generate-manifest installer manifest (maven)
generate-dockerfile dockerfile: build and distribution
generate-goland Create a Goland project for the application
generate-vscode Create a VSCode project for the application
generate-jenkins Create Jenkins CI templates
generate-github Create github configuration files
generate-git Create git repository
Called explicitly
completion autocompletion script for the specified shell
generate-certificate Generate an X.509 server certificate and private key
generate-channel Create async channel
generate-channel-asyncapi Create stream from AsyncApi 2.4 specification
generate-channel-publisher Create async channel publisher
generate-channel-subscriber Create async channel subscriber
generate-domain-openapi Create domains from OpenAPI 3.0 manifest
generate-domain-system Generate system domain implementation
generate-domain-tenant Generate tenant domain implementation
generate-timer Generate timer implementation
generate-topic-publisher Generate publisher topic implementation
generate-topic-subscriber Generate subscriber topic implementation
generate-webservices Create web services from swagger manifest
Using Skel template functions
General Structure
Skel uses targets, which are groups of associated generating actions. Typically, a target will perform substitutions on a set of template files, emit them to appropriate directories in the generated tree, and may then perform shell functions (e.g. git) to complete the generated application.
Internally, targets are identified by unique strings which allow target lists to be manipulated easily. Each target generally also has a corresponding skel cli subcommand which will execute it.
File types
Skel can template any text file-type, but specifically recognizes the extensions for go, make, json, sql, yaml, groovy, properties, md, go-mod, docker, shell, js, ts and jenkins files.
Substitution
Skel does substitutions into template files in three phases:
- It substitutes particular `Strings` verbatim; these are passed into the rendering functions via the `RenderOptions` struct
- It then substitutes variable values for the text in the templates matching `${variable}`, e.g. the application name would be substituted for `${app.name}`. The possible variables are listed in `skel/render.go` around line 95.
- It then evaluates conditional blocks (see below)
For example, this piece of dockerfile:
```dockerfile
FROM ${BASE_IMAGE}
EXPOSE ${server.port}
EXPOSE ${debug.port}
ENV SERVICE_BIN "/usr/bin/${app.name}"
COPY --from=debug-builder /app/dist/root/ /
COPY --from=debug-builder /go/bin/dlv /usr/bin
```
An easier option than pawing around in the source code: the available substitutions and conditionals may be listed by executing a skel generation with the debug or trace log levels (`skel -l=DEBUG`); they will be printed as part of the render options log lines.
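The `${variable}` substitution phase described above can be sketched in Go. This is not skel's actual `render.go` implementation, just a minimal illustration of the idea; unresolved markers are left in place rather than treated as errors:

```go
package main

import (
	"fmt"
	"regexp"
)

// marker matches ${name} placeholders such as ${app.name} or ${server.port}.
var marker = regexp.MustCompile(`\$\{([a-zA-Z0-9_.]+)\}`)

// substitute replaces ${name} markers with values from vars,
// leaving any unknown markers untouched.
func substitute(template string, vars map[string]string) string {
	return marker.ReplaceAllStringFunc(template, func(m string) string {
		name := marker.FindStringSubmatch(m)[1]
		if v, ok := vars[name]; ok {
			return v
		}
		return m // leave unresolved markers as-is
	})
}

func main() {
	vars := map[string]string{"app.name": "dnaservice", "server.port": "9999"}
	fmt.Println(substitute(`ENV SERVICE_BIN "/usr/bin/${app.name}"`, vars))
	fmt.Println(substitute(`EXPOSE ${server.port}`, vars))
}
```

The same mechanism applies to file names as well as file contents (see File Names, below).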
Conditional Blocks
These are defined by conditional markers and the words if, else and endif. Conditional markers vary by file type:
- make, yaml, properties, docker, bash: `#`
- sql: `--#`
- xml, md: `<!--#`, `-->`
- everything else: `//#`
For example, the following block in a Makefile includes different lists depending on the app archetype:
#if GENERATOR_APP
all: clean deps vet test docker assemblies deployment manifest
#endif GENERATOR_APP
#if GENERATOR_BEAT
all: clean deps vet test docker deployment manifest package
#endif GENERATOR_BEAT
#if GENERATOR_SP
all: clean deps vet test docker assemblies deployment manifest package
#endif GENERATOR_SP
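Conditional evaluation for `#`-style files can be sketched as below. This is a simplified illustration, not skel's implementation: it handles only non-nested if/endif blocks and omits the else branch and the other marker styles:

```go
package main

import (
	"fmt"
	"strings"
)

// evalConditionals keeps or drops the lines between "#if NAME" and
// "#endif ..." markers depending on whether NAME is enabled.
func evalConditionals(text string, enabled map[string]bool) string {
	var out []string
	keep := true
	for _, line := range strings.Split(text, "\n") {
		trimmed := strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(trimmed, "#if "):
			keep = enabled[strings.TrimPrefix(trimmed, "#if ")]
		case strings.HasPrefix(trimmed, "#endif"):
			keep = true
		case keep:
			out = append(out, line)
		}
	}
	return strings.Join(out, "\n")
}

func main() {
	makefile := "#if GENERATOR_APP\nall: clean deps vet test\n#endif GENERATOR_APP\n" +
		"#if GENERATOR_BEAT\nall: clean package\n#endif GENERATOR_BEAT"
	// Only the GENERATOR_APP block survives.
	fmt.Println(evalConditionals(makefile, map[string]bool{"GENERATOR_APP": true}))
}
```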
File Operations
As a cross-check mechanism, Skel is provided with the ability to insist whether files already exist or not during generation, and to halt if something is unexpected. The options are:
- Add: either add the file or replace it, we care not
- New: must not exist before, halt if it does
- AddNoOverwrite: May add it, or skip it, but don't halt
- Replace: must exist and we replace it
- Delete: must exist (or we halt) then we delete it
- Gone: delete it if it exists, don't halt
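The decision logic for these modes can be sketched as follows. The mode names match the documentation above, but the types and signatures here are illustrative, not skel's real API:

```go
package main

import (
	"errors"
	"fmt"
)

// FileOp enumerates the generation modes described above.
type FileOp int

const (
	OpAdd            FileOp = iota // add or replace, no check
	OpNew                          // must not exist beforehand
	OpAddNoOverwrite               // add if missing, otherwise skip
	OpReplace                      // must already exist
	OpDelete                       // must exist, then delete
	OpGone                         // delete if present, never halt
)

// check decides what to do given the mode and whether the destination
// already exists: "write", "skip", "delete", or an error (halt).
func check(op FileOp, exists bool) (string, error) {
	switch op {
	case OpAdd:
		return "write", nil
	case OpNew:
		if exists {
			return "", errors.New("file already exists")
		}
		return "write", nil
	case OpAddNoOverwrite:
		if exists {
			return "skip", nil
		}
		return "write", nil
	case OpReplace:
		if !exists {
			return "", errors.New("file does not exist")
		}
		return "write", nil
	case OpDelete:
		if !exists {
			return "", errors.New("file does not exist")
		}
		return "delete", nil
	case OpGone:
		if exists {
			return "delete", nil
		}
		return "skip", nil
	}
	return "", errors.New("unknown operation")
}

func main() {
	action, _ := check(OpAddNoOverwrite, true)
	fmt.Println(action) // skip: the file exists, so leave it alone
}
```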
File Names
Each template file has a source filename, which is in the embedded filesystem of templates, and a destination filename, which is relative to the root of the generated project.
Variables may be substituted into filenames of either type, using the same syntax as within templates, e.g. `local/${app.name}.remote.yml`.
If a destination file name is not provided, it is assumed to be the same as the source.