[chore] Spelling receiver/s #37610

Open · wants to merge 25 commits into base: main
2 changes: 1 addition & 1 deletion receiver/saphanareceiver/README.md
@@ -99,5 +99,5 @@ with detailed sample configurations [here](./testdata/config.yaml).

Details about the metrics produced by this receiver can be found in [metadata.yaml](./metadata.yaml). Further details of the monitoring queries used to collect them may be found in [queries.go](./queries.go).

- > If all of the metrics collected by a given monitoring query are marked as `enabled: false` in the receiver configration, the monitoring query will not be executed.
+ > If all of the metrics collected by a given monitoring query are marked as `enabled: false` in the receiver configuration, the monitoring query will not be executed.
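As an illustration of that note, here is a minimal sketch of disabling a metric in the collector config. The endpoint value and the metric name `saphana.cpu.used` are placeholders chosen for the example (check metadata.yaml for the actual metric names), not something defined by this PR.

```yaml
receivers:
  saphana:
    endpoint: "example-sap-host:30015"   # placeholder endpoint
    collection_interval: 60s
    metrics:
      saphana.cpu.used:                  # placeholder metric name; see metadata.yaml
        enabled: false
```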

2 changes: 1 addition & 1 deletion receiver/saphanareceiver/client_test.go
@@ -107,7 +107,7 @@ func TestBasicConnectAndClose(t *testing.T) {

func TestFailedPing(t *testing.T) {
dbWrapper := &testDBWrapper{}
dbWrapper.On("PingContext").Return(errors.New("Coult not ping host"))
dbWrapper.On("PingContext").Return(errors.New("Could not ping host"))
dbWrapper.On("Close").Return(nil)

factory := &testConnectionFactory{dbWrapper}
@@ -83,13 +83,13 @@ func (r *Receiver) CollectInSync(ctx context.Context, segments *agent.SegmentCol
for _, segment := range segments.Segments {
marshaledSegment, err := proto.Marshal(segment)
if err != nil {
fmt.Printf("cannot marshal segemnt from sync, %v", err)
fmt.Printf("cannot marshal segment from sync, %v", err)
}
err = consumeTraces(ctx, segment, r.nextConsumer)
if err != nil {
fmt.Printf("cannot consume traces, %v", err)
}
fmt.Printf("receivec data:%s", marshaledSegment)
fmt.Printf("received data:%s", marshaledSegment)
}
return &common.Commands{}, nil
}
2 changes: 1 addition & 1 deletion receiver/skywalkingreceiver/skywalking_receiver.go
@@ -39,7 +39,7 @@ type configuration struct {
CollectorGRPCServerSettings configgrpc.ServerConfig
}

- // Receiver type is used to receive spans that were originally intended to be sent to Skywaking.
+ // Receiver type is used to receive spans that were originally intended to be sent to Skywalking.
// This receiver is basically a Skywalking collector.
type swReceiver struct {
config *configuration
4 changes: 2 additions & 2 deletions receiver/snmpreceiver/README.md
@@ -126,14 +126,14 @@ Attribute configurations are used to define what resource attributes will be use
| -- | -- | -- | -- |
| `oid` | The SNMP scalar OID value to grab data from (must end in .0). | string | |
| `resource_attributes` | The names of the related resource attribute configurations, allowing scalar oid metrics to be added to resources that have one or more scalar oid resource attributes. Cannot have indexed resource attributes as values. | string[] | |
- | `attributes` | The names of the related attribute enum configurations as well as the values to attach to this returned SNMP scalar data. This can be used to have a metric config with multiple ScalarOIDs as different datapoints with different attributue values within the same metric | Attribute | |
+ | `attributes` | The names of the related attribute enum configurations as well as the values to attach to this returned SNMP scalar data. This can be used to have a metric config with multiple ScalarOIDs as different datapoints with different attribute values within the same metric | Attribute | |

#### ColumnOID Configuration

| Field Name | Description | Value | Default |
| -- | -- | -- | -- |
| `oid` | The SNMP scalar OID value to grab data from (must end in .0). | string | |
- | `attributes` | The names of the related attribute configurations as well as the enum values to attach to this returned SNMP indexed data if the attribute configuration has enum data. This can be used to attach a specific metric SNMP column OID to an attribute. In doing so, multiple datapoints for a single metric will be created for each returned SNMP indexed data value for the metric along with different attribute values to differentiate them. This also can be used to have a metric config with multiple ColumnOIDs as different datapoints with different attributue values within the same metric | Attribute[] | |
+ | `attributes` | The names of the related attribute configurations as well as the enum values to attach to this returned SNMP indexed data if the attribute configuration has enum data. This can be used to attach a specific metric SNMP column OID to an attribute. In doing so, multiple datapoints for a single metric will be created for each returned SNMP indexed data value for the metric along with different attribute values to differentiate them. This also can be used to have a metric config with multiple ColumnOIDs as different datapoints with different attribute values within the same metric | Attribute[] | |
| `resource_attributes` | The names of the related resource attribute configurations. This is used to attach a specific metric SNMP column OID to a resource attribute. In doing so, multiple resources will be created for each returned SNMP indexed data value for the metric | string[] | |
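To make the ColumnOID table above concrete, here is a rough sketch of a column OID metric that attaches an attribute to each indexed datapoint. The OIDs, metric and attribute names, and the `value_type` shown are illustrative assumptions for the example, not defaults shipped with the receiver.

```yaml
receivers:
  snmp:
    endpoint: udp://localhost:161            # placeholder SNMP agent address
    attributes:
      interface.name:
        oid: "1.3.6.1.2.1.31.1.1.1.1"        # example column OID supplying indexed attribute values
    metrics:
      interface.traffic.in:
        unit: By
        sum:
          aggregation: cumulative
          monotonic: true
          value_type: int
        column_oids:
          - oid: "1.3.6.1.2.1.31.1.1.1.6"    # example traffic counter column OID
            attributes:
              - name: interface.name         # one datapoint per index, differentiated by this attribute
```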

#### Attribute
6 changes: 3 additions & 3 deletions receiver/snmpreceiver/config.go
@@ -164,7 +164,7 @@ type AttributeConfig struct {
// This contains a list of possible values that can be associated with this attribute
Enum []string `mapstructure:"enum"`
// OID is required only if Enum and IndexedValuePrefix are not defined.
- // This is the column OID which will provide indexed values to be uased for this attribute (alongside a metric with ColumnOIDs)
+ // This is the column OID which will provide indexed values to be used for this attribute (alongside a metric with ColumnOIDs)
OID string `mapstructure:"oid"`
// IndexedValuePrefix is required only if Enum and OID are not defined.
// This is used alongside metrics with ColumnOIDs to assign attribute values using this prefix + the OID index of the metric value
@@ -190,7 +190,7 @@ type MetricConfig struct {

// GaugeMetric contains info about the value of the gauge metric
type GaugeMetric struct {
- // ValueType is required can can be either int or double
+ // ValueType is required and can be either int or double
ValueType string `mapstructure:"value_type"`
}

@@ -200,7 +200,7 @@ type SumMetric struct {
Aggregation string `mapstructure:"aggregation"`
// Monotonic is required and can be true or false
Monotonic bool `mapstructure:"monotonic"`
- // ValueType is required can can be either int or double
+ // ValueType is required and can be either int or double
ValueType string `mapstructure:"value_type"`
}

2 changes: 1 addition & 1 deletion receiver/snowflakereceiver/client_test.go
@@ -218,7 +218,7 @@ func TestMetricQueries(t *testing.T) {
{
desc: "FetchSessionMetrics",
query: sessionMetricsQuery,
columns: []string{"username", "disctinct_id"},
columns: []string{"username", "distinct_id"},
params: []driver.Value{"t", 3.0},
expect: sessionMetric{
userName: sql.NullString{
6 changes: 3 additions & 3 deletions receiver/snowflakereceiver/documentation.md
@@ -306,7 +306,7 @@ Reported total credits used in the cloud service over the last 24 hour window.

| Name | Description | Values |
| ---- | ----------- | ------ |
- | service_type | Service type associateed with metric query. | Any Str |
+ | service_type | Service type associated with metric query. | Any Str |

### snowflake.billing.total_credit.total

@@ -320,7 +320,7 @@ Reported total credits used across account over the last 24 hour window.

| Name | Description | Values |
| ---- | ----------- | ------ |
- | service_type | Service type associateed with metric query. | Any Str |
+ | service_type | Service type associated with metric query. | Any Str |

### snowflake.billing.virtual_warehouse.total

@@ -334,7 +334,7 @@ Reported total credits used by virtual warehouse service over the last 24 hour w

| Name | Description | Values |
| ---- | ----------- | ------ |
- | service_type | Service type associateed with metric query. | Any Str |
+ | service_type | Service type associated with metric query. | Any Str |

### snowflake.billing.warehouse.cloud_service.total

2 changes: 1 addition & 1 deletion receiver/snowflakereceiver/factory_test.go
@@ -15,7 +15,7 @@ import (
"github.com/open-telemetry/opentelemetry-collector-contrib/receiver/snowflakereceiver/internal/metadata"
)

- func TestFacoryCreate(t *testing.T) {
+ func TestFactoryCreate(t *testing.T) {
factory := NewFactory()
require.EqualValues(t, metadata.Type, factory.Type())
}
2 changes: 1 addition & 1 deletion receiver/snowflakereceiver/metadata.yaml
@@ -17,7 +17,7 @@ resource_attributes:

attributes:
service_type:
- description: Service type associateed with metric query.
+ description: Service type associated with metric query.
type: string
error_message:
description: Error message reported by query if present.
6 changes: 3 additions & 3 deletions receiver/snowflakereceiver/scraper_test.go
@@ -27,12 +27,12 @@ func TestScraper(t *testing.T) {
cfg.Warehouse = "warehouse"
err := component.ValidateConfig(cfg)
if err != nil {
t.Fatal("an error ocured when validating config", err)
t.Fatal("an error occurred when validating config", err)
}

db, mock, err := sqlmock.New(sqlmock.QueryMatcherOption(sqlmock.QueryMatcherEqual))
if err != nil {
t.Fatal("an error ocured when opening mock db", err)
t.Fatal("an error occurred when opening mock db", err)
}
defer db.Close()

@@ -124,7 +124,7 @@ func (m *mockDB) initMockDB() {
},
{
query: sessionMetricsQuery,
columns: []string{"username", "disctinct_id"},
columns: []string{"username", "distinct_id"},
params: []driver.Value{"t", 3.0},
},
{
6 changes: 3 additions & 3 deletions receiver/solacereceiver/factory.go
@@ -17,8 +17,8 @@ import (
)

const (
- // default value for max unaked messages
- defaultMaxUnaked int32 = 1000
+ // default value for max unacked messages
+ defaultMaxUnacked int32 = 1000
// default value for host
defaultHost string = "localhost:5671"
)
@@ -36,7 +36,7 @@ func NewFactory() receiver.Factory {
func createDefaultConfig() component.Config {
return &Config{
Broker: []string{defaultHost},
- MaxUnacked: defaultMaxUnaked,
+ MaxUnacked: defaultMaxUnacked,
Auth: Authentication{},
TLS: configtls.ClientConfig{
InsecureSkipVerify: false,
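For reference, a hedged sketch of a configuration using the defaults shown in this hunk. The `broker`, `queue`, `max_unacknowledged`, and `auth` keys are assumed from the receiver's README and config struct, and the queue name and credentials are placeholders.

```yaml
receivers:
  solace:
    broker: ["localhost:5671"]           # matches defaultHost above
    max_unacknowledged: 1000             # matches defaultMaxUnacked above
    queue: "queue://#telemetry-example"  # placeholder telemetry queue
    auth:
      sasl_plain:
        username: otel                   # placeholder credentials
        password: otel-password
```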
2 changes: 1 addition & 1 deletion receiver/solacereceiver/messaging_service.go
@@ -72,7 +72,7 @@ func newAMQPMessagingServiceFactory(cfg *Config, logger *zap.Logger) (messagingS
}

type amqpConnectConfig struct {
- // conenct config
+ // connect config
addr string
saslConfig amqp.SASLType
tlsConfig *tls.Config
4 changes: 2 additions & 2 deletions receiver/solacereceiver/messaging_service_test.go
@@ -504,7 +504,7 @@ func TestAMQPSubstituteVariables(t *testing.T) {

// testFunctionEquality will check that the pointer names are the same for the two functions.
// It is not a perfect comparison but will perform well differentiating between anonymous
- // functions and the amqp named functinos
+ // functions and the amqp named functions
func testFunctionEquality(t *testing.T, f1, f2 any) {
assert.Equal(t, (f1 == nil), (f2 == nil))
if f1 == nil {
@@ -654,7 +654,7 @@ func (c *connMock) Read(b []byte) (n int, err error) {
d := <-c.nextData
// the way this test fixture is designed, there is a race condition
// between write and read where data may be written to nextData on
- // a call to Write and may be propogated prior to the return of Write.
+ // a call to Write and may be propagated prior to the return of Write.
time.Sleep(10 * time.Millisecond)
c.remaining = bytes.NewReader(d)
}
4 changes: 2 additions & 2 deletions receiver/solacereceiver/receiver.go
@@ -41,7 +41,7 @@ const (
)

const (
- brokerComponenteNameAttr = "receiver_name"
+ brokerComponentNameAttr = "receiver_name"
)

// solaceTracesReceiver uses azure AMQP to consume and handle telemetry data from SOlace. Implements receiver.Traces
@@ -88,7 +88,7 @@ func newTracesReceiver(config *Config, set receiver.Settings, nextConsumer consu
receiverName = "solace"
}
solaceBrokerAttrs := attribute.NewSet(
- attribute.String(brokerComponenteNameAttr, receiverName),
+ attribute.String(brokerComponentNameAttr, receiverName),
)

unmarshaller := newTracesUnmarshaller(set.Logger, telemetryBuilder, solaceBrokerAttrs)
2 changes: 1 addition & 1 deletion receiver/solacereceiver/unmarshaller.go
@@ -25,7 +25,7 @@ type tracesUnmarshaller interface {
unmarshal(message *inboundMessage) (ptrace.Traces, error)
}

- // newUnmarshalleer returns a new unmarshaller ready for message unmarshalling
+ // newTracesUnmarshaller returns a new unmarshaller ready for message unmarshalling
func newTracesUnmarshaller(logger *zap.Logger, telemetryBuilder *metadata.TelemetryBuilder, metricAttrs attribute.Set) tracesUnmarshaller {
return &solaceTracesUnmarshaller{
logger: logger,
2 changes: 1 addition & 1 deletion receiver/solacereceiver/unmarshaller_egress.go
@@ -24,7 +24,7 @@ import (
type brokerTraceEgressUnmarshallerV1 struct {
logger *zap.Logger
telemetryBuilder *metadata.TelemetryBuilder
- metricAttrs attribute.Set // othere Otel attributes (to add to the metrics)
+ metricAttrs attribute.Set // other Otel attributes (to add to the metrics)
}

// unmarshal implements tracesUnmarshaller.unmarshal
2 changes: 1 addition & 1 deletion receiver/solacereceiver/unmarshaller_move.go
@@ -21,7 +21,7 @@ import (
type brokerTraceMoveUnmarshallerV1 struct {
logger *zap.Logger
telemetryBuilder *metadata.TelemetryBuilder
- metricAttrs attribute.Set // othere Otel attributes (to add to the metrics)
+ metricAttrs attribute.Set // other Otel attributes (to add to the metrics)
}

// unmarshal implements tracesUnmarshaller.unmarshal
4 changes: 2 additions & 2 deletions receiver/solacereceiver/unmarshaller_receive.go
@@ -175,7 +175,7 @@ func (u *brokerTraceReceiveUnmarshallerV1) mapClientSpanAttributes(spanData *rec
attrMap.PutInt(droppedEnqueueEventsSuccessAttrKey, int64(spanData.DroppedEnqueueEventsSuccess))
attrMap.PutInt(droppedEnqueueEventsFailedAttrKey, int64(spanData.DroppedEnqueueEventsFailed))

- // The IPs are now optional meaning we will not incluude them if they are zero length
+ // The IPs are now optional meaning we will not include them if they are zero length
hostIPLen := len(spanData.HostIp)
if hostIPLen == 4 || hostIPLen == 16 {
attrMap.PutStr(hostIPAttrKey, net.IP(spanData.HostIp).String())
@@ -372,7 +372,7 @@ func (u *brokerTraceReceiveUnmarshallerV1) unmarshalBaggage(toMap pcommon.Map, b
return nil
}

- // insertUserProperty will instert a user property value with the given key to an attribute if possible.
+ // insertUserProperty will insert a user property value with the given key to an attribute if possible.
// Since AttributeMap only supports int64 integer types, uint64 data may be misrepresented.
func (u *brokerTraceReceiveUnmarshallerV1) insertUserProperty(toMap pcommon.Map, key string, value any) {
const (
2 changes: 1 addition & 1 deletion receiver/solacereceiver/unmarshaller_test.go
@@ -37,7 +37,7 @@ func TestSolaceMessageUnmarshallerUnmarshal(t *testing.T) {
err error
}{
{
name: "Unknown Topic Stirng",
name: "Unknown Topic String",
message: &inboundMessage{
Properties: &amqp.MessageProperties{
To: &invalidTopicString,
@@ -1,3 +1,3 @@
# This powershell script finds all SQLServer counter paths and dumps them to counters.txt.
- # This should be run on a system with the a running SQLServer.
+ # This should be run on a system with a running SQLServer.
(Get-Counter -ListSet "SQLServer:*").Paths | Set-Content -Path "$PSScriptRoot\counters.txt"
@@ -1,4 +1,4 @@
# This powershell script finds all SQLServer counter paths and dumps them to counters.txt for a named instance.
- # This should be run on a system with the a running SQLServer and named instance.
+ # This should be run on a system with a running SQLServer and named instance.
# This example uses a named instance of TEST_NAME.
(Get-Counter -ListSet "MSSQL$*").Paths| Set-Content -Path "$PSScriptRoot\counters.txt"
2 changes: 1 addition & 1 deletion receiver/sshcheckreceiver/scraper.go
@@ -63,7 +63,7 @@ func (s *sshcheckScraper) scrapeSFTP(now pcommon.Timestamp) error {
return err
}

- // timeout chooses the shorter between between a given deadline and timeout
+ // timeout chooses the shorter duration between a given deadline and timeout
func timeout(deadline time.Time, timeout time.Duration) time.Duration {
timeToDeadline := time.Until(deadline)
if timeToDeadline < timeout {
2 changes: 1 addition & 1 deletion receiver/syslogreceiver/README.md
@@ -22,7 +22,7 @@ Parses Syslogs received over TCP or UDP.
| `udp` | `nil` | Defined udp_input operator. (see the UDP configuration section) |
| `protocol` | required | The protocol to parse the syslog messages as. Options are `rfc3164` and `rfc5424` |
| `location` | `UTC` | The geographic location (timezone) to use when parsing the timestamp (Syslog RFC 3164 only). The available locations depend on the local IANA Time Zone database. [This page](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) contains many examples, such as `America/New_York`. |
- | `enable_octet_counting` | `false` | Wether or not to enable [RFC 6587](https://www.rfc-editor.org/rfc/rfc6587#section-3.4.1) Octet Counting on syslog parsing (Syslog RFC 5424 and TCP only). |
+ | `enable_octet_counting` | `false` | Whether or not to enable [RFC 6587](https://www.rfc-editor.org/rfc/rfc6587#section-3.4.1) Octet Counting on syslog parsing (Syslog RFC 5424 and TCP only). |
| `max_octets` | `8192` | The maximum octets for messages using [RFC 6587](https://www.rfc-editor.org/rfc/rfc6587#section-3.4.1) Octet Counting on syslog parsing (Syslog RFC 5424 and TCP only). |
| `allow_skip_pri_header` | `false` | Allow parsing records without the PRI header. If this setting is enabled, messages without the PRI header will be successfully parsed. The `SeverityNumber` and `SeverityText` fields as well as the `priority` and `facility` attributes will not be set on the log record. If this setting is disabled (the default), messages without PRI header will throw an exception. To set this setting to `true`, the `enable_octet_counting` setting must be `false`. |
| `non_transparent_framing_trailer` | `nil` | The framing trailer, either `LF` or `NUL`, when using [RFC 6587](https://www.rfc-editor.org/rfc/rfc6587#section-3.4.2) Non-Transparent-Framing (Syslog RFC 5424 and TCP only). |
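A brief sketch tying these options together, assuming the standard syslog receiver config layout documented in this README; the listen address is a placeholder.

```yaml
receivers:
  syslog:
    tcp:
      listen_address: "0.0.0.0:54526"   # placeholder listen address
    protocol: rfc5424
    enable_octet_counting: true          # RFC 6587 octet counting (RFC 5424 + TCP only)
    max_octets: 8192
```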