Logstash is not reading data from Kinesis #20
Hi @rishabhgupta0, sorry that this question fell through the cracks. Is this still something you're trying to track down? If so, do you have any relevant errors or warnings in Logstash's own logs?
I have the same issue, and unfortunately the plugin itself produces no output. Quite frustrating. The problem may be due to restarting dockerised Logstash and the UUID in ./data/uuid no longer matching the UUID in DynamoDB. It would be nice to have some extended debug output to know what the plugin is doing.
If the UUID changes, the KCL will see it as a new worker, and the shard will become available to that new worker after the default failover timeout of 10 seconds has passed: https://github.com/awslabs/amazon-kinesis-client/blob/master/src/main/java/com/amazonaws/services/kinesis/clientlibrary/lib/worker/KinesisClientLibConfiguration.java#L52. So the UUID changing shouldn't be a problem for more than 10 seconds after startup. But yeah, I definitely agree more logging would be useful.
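For reference, the default in question, quoted from that file (KCL 1.x; check your pinned version, as the exact line may differ):

// KinesisClientLibConfiguration.java -- default lease failover time:
// a lost worker's shard leases can be taken over after 10 seconds
public static final long DEFAULT_FAILOVER_TIME_MILLIS = 10000L;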
We are also having this issue, and I am not sure how to debug further. There is no output in the logs and no warnings about our configuration; other inputs are working fine. We are using Logstash 5.5.0 and version 2.0.6 of this plugin. We are also running through Docker (the community-supported image), if that matters.
It would be good if we had an option to switch on extended KCL debug information.
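Until the plugin grows such an option, one possible workaround (a sketch, not verified across Logstash versions) is to raise the log level for the KCL's packages in Logstash's own config/log4j2.properties, since Logstash logs through Log4j2 and the KCL logs under the com.amazonaws namespace:

# config/log4j2.properties -- hypothetical additions; the logger name assumes
# the KCL 1.x package layout this plugin uses
logger.kcl.name = com.amazonaws.services.kinesis
logger.kcl.level = debug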
We were able to get this to work by opening up the permissions to DynamoDB. We gave the Logstash instance role all permissions to DynamoDB, and it's working. We're now in the process of trial and error to roll that back to just the permissions it actually needs, so that is something to try if you are hitting this issue. Agreed that there should be an option to turn on debug logs. However, I strongly feel that issues caused by missing permissions should be logged at least as WARN, if not ERROR; I would favor ERROR since it is a blocking issue. Having no logs for this scenario is unacceptable IMO. Once we know the exact permissions needed for DynamoDB, we will open a PR to get those documented in the README.
That's interesting. In that permissions situation I would expect the KCL to throw an exception, or at least log at WARN level or above, and either of those would result in some Logstash output, so apparently it is doing neither. Sounds like we need to dig further into this plugin's error handling, and into whether it needs to configure the KCL further as well. Unfortunately it might be a few weeks before I can do that digging myself; if anybody else can pick it up, that'd be great.
My DynamoDB policy is:

Possibly the kinesis policy is causing problems. I'll do some more testing tomorrow. But similar to you, I see no log output.
We have narrowed the permissions down to:

Another option would be to give
Here's the relevant Kinesis/KCL documentation covering the necessary Kinesis and DynamoDB permissions: http://docs.aws.amazon.com/streams/latest/dev/learning-kinesis-module-one-iam.html
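Distilling that page, a consumer policy would look roughly like the sketch below. The specifics are all placeholders to adapt: the account ID (123456789012), the stream name (ElasticPoc, taken from this issue), and the lease table name (logstash, this plugin's default application_name). JSON allows no comments, so the Sid fields serve as the labels:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadTheStream",
      "Effect": "Allow",
      "Action": [
        "kinesis:DescribeStream",
        "kinesis:GetShardIterator",
        "kinesis:GetRecords"
      ],
      "Resource": "arn:aws:kinesis:us-east-1:123456789012:stream/ElasticPoc"
    },
    {
      "Sid": "KclLeaseTable",
      "Effect": "Allow",
      "Action": [
        "dynamodb:CreateTable",
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Scan"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/logstash"
    },
    {
      "Sid": "KclMetrics",
      "Effect": "Allow",
      "Action": ["cloudwatch:PutMetricData"],
      "Resource": "*"
    }
  ]
}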
@bradseefeld the kinesis policy you provided doesn't work at all for me, but if I add a kinesis:* action I start to see data filtering into Kibana. The moment I remove the kinesis:* action, it stops. Are you sure that policy is working for you? The DynamoDB policy seems fine. The odd thing is, the actions in your policy match those listed in the Kinesis/KCL documentation that @codekitchen linked above.
That's odd! Yes, it is working for us... I pulled it from the IAM 'show policy' tool directly.
OK, there's something else happening that I must not be accounting for. I had a situation where adding kinesis:* to the policy caused log data to flow and removing it caused it to stop; I tried it a number of times and it was consistent. But now it is working fine with just these kinesis actions:

Frustrating, but at least I'm now seeing log data in Kibana!
Hi, can anyone help? I only have the following logs:
Hi @liuyue-zenjoy, we are having the same issue (we have given Logstash full CloudWatch/Kinesis/DynamoDB permissions, but it's not reading the stream) and the Logstash log looks the same.
Same here.... frustrating indeed.... @yufuluo have you found something?
Can you go into DynamoDB and find the checkpoint data? E.g., is it writing to DynamoDB? We're currently using this plugin without any issues.
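One quick way to do that check, assuming the plugin's default application_name of logstash (the KCL lease table shares that name):

# Scan the KCL lease table to see each shard's owner and checkpoint.
# "logstash" is the plugin's default application_name; substitute yours.
aws dynamodb scan --table-name logstash \
  --query 'Items[].{shard: leaseKey.S, owner: leaseOwner.S, checkpoint: checkpoint.S}' \
  --output table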
Hi @lgarvey, yes, I do see the checkpoint in DynamoDB... :\
Had the same problem and spent some time trying to understand the reason. After upgrading
I'm having similar issues, but there might be slight differences:
Hi, I am trying to read a Kinesis stream using this plugin. Logstash starts correctly and parses the configuration file, but I do not see any output. Below is my config file:
input {
  kinesis {
    kinesis_stream_name => "ElasticPoc"
    type => "kinesis"
  }
  tcp {
    port => 10000
    type => "tcp"
  }
}
filter {
  if [type] == "kinesis" {
    json {
      source => "message"
    }
  }
  if [type] == "tcp" {
    grok {
      match => { "message" => "Hello, %{WORD:name}" }
    }
  }
}
output {
  if [type] == "kinesis" {
    elasticsearch {
      hosts => "http://hostname:9200"
      user => "elastic"
      password => "changeme"
      index => "elasticpoc"
    }
  }
  if [type] == "tcp" {
    elasticsearch {
      hosts => "http://hostname:9200"
      user => "elastic"
      password => "changeme"
      index => "elkpoc"
    }
  }
}
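For comparison, a minimal kinesis input spelling out the plugin's documented defaults would look roughly like this; region and application_name (assumed here at their defaults) are worth double-checking, since the KCL creates its lease table under application_name in the configured region:

input {
  kinesis {
    kinesis_stream_name => "ElasticPoc"
    application_name => "logstash"  # DynamoDB lease table name (plugin default)
    region => "us-east-1"           # plugin default; must match the stream's region
    type => "kinesis"
  }
}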