
Steadily increasing memory usage #335

Open
ofosos opened this issue May 17, 2018 · 5 comments

Comments

@ofosos

ofosos commented May 17, 2018

Hi,
we're seeing steadily increasing memory usage with biggraphite:

[screenshot from 2018-05-17 18-21-37]

This instance has been live for about a week, and memory usage has grown from an initial 2 GiB to 3.6 GiB over the course of six days.

Can you help us out with any insights into what could be the culprit here?

Regards, Mark

@iksaif
Contributor

iksaif commented May 17, 2018

Hello,

  • Would you have the same graph with whisper only?
  • Can you share the relevant parts of your configuration?
  • Is this a cache or an aggregator-cache?
  • Could you add other carbon-related metrics to the same graph? (You can configure carbon to send metrics to itself and graph the cache size and similar things; see the snippet just below.)
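
For reference, carbon's self-instrumentation is controlled by two settings; the values below are illustrative, adjust them to your own carbon.conf:

```
[cache]
# Prefix under which carbon publishes its own metrics; the series show up as
# carbon.agents.<hostname>-<instance>.* (cache.size, pointsPerUpdate, memUsage, ...)
CARBON_METRIC_PREFIX = carbon
# Interval, in seconds, at which carbon records its internal metrics
CARBON_METRIC_INTERVAL = 60
```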

We might have such leaks, but since we're using cgroups and redundant carbon instances we do not notice them.
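
As an aside, if you want a similar safety net, a per-service memory cap is one way to keep a leaking carbon-cache contained; this is only a sketch of a systemd drop-in, assuming carbon-cache runs as a systemd service, and the 4G limit is an arbitrary example:

```
# /etc/systemd/system/carbon-cache.service.d/memory.conf (illustrative drop-in)
[Service]
# Hard cgroup memory limit; the process is OOM-killed once it exceeds this,
# and systemd restarts it if Restart= is configured in the main unit.
MemoryMax=4G
```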

Thanks,

@ofosos
Author

ofosos commented May 17, 2018

We don't use whisper.
This is a carbon cache, which should not do aggregations.

The carbon metrics are enabled; I'll post them tomorrow.

What are you using as a frontend to the stack? Currently we have gostatsd, but we might add statsrelay.

@ofosos
Copy link
Author

ofosos commented May 18, 2018

Here are the Graphite metrics:

[screenshot from 2018-05-18 09-26-21]
[screenshot from 2018-05-18 09-26-13]
[screenshot from 2018-05-18 09-26-06]
[screenshot from 2018-05-18 09-25-57]

Argh, I just noticed that gostatsd is still labeled Dogstatsd; I always confuse the names :(

@iksaif
Contributor

iksaif commented May 18, 2018

It looks like pointsPerUpdate is increasing. If you look at the values under the .cache. branch of the metric tree, you'll probably see them increase as well; that would explain the memory usage.
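
For reference, these are the stock carbon self-metrics worth putting on one graph to correlate cache growth with flush speed; adjust the wildcard to however your carbon.agents.* instances are named:

```
carbon.agents.*.cache.size        # datapoints currently held in the cache
carbon.agents.*.cache.queues      # number of per-metric queues in the cache
carbon.agents.*.pointsPerUpdate   # average datapoints flushed per write
carbon.agents.*.updateOperations  # write operations per reporting interval
carbon.agents.*.memUsage          # memory used by the carbon-cache process
```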

This can be caused by Cassandra becoming slightly slower to store the points. 13 points per update is pretty high (we tend to stay around 2) but it really depends on your base resolution.

I suggest you look at your Cassandra cluster's health, and maybe tweak the table that holds the points for your base resolution.

@ofosos
Author

ofosos commented May 18, 2018

Oh, here is the graph with 7 weeks of data:

[screenshot from 2018-05-18 09-46-42]

I don't think it's too bad, but we might still investigate. I'll report back when I know more.

adriengentil removed their assignment Sep 6, 2019