
Possible l1_cache latency bug? #140

Open
springrain12 opened this issue Aug 4, 2019 · 1 comment

@springrain12

When I change the l1_cache latency in the gpgpusim.config file, the total number of instructions changes (i.e., gpu_tot_sim_insn).

I used the simulator from the dev branch, compiled with CUDA 9.1 and g++ 5.4.0 on Ubuntu 16.04, and ran the cudaTensorCoreGemm application from the NVIDIA CUDA SDK 9.1.

Is this the expected result? Since the workload is deterministic, I would expect the instruction count to be independent of cache latency.
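One way to check for this is to compare the gpu_tot_sim_insn line reported in the simulator logs of two runs that differ only in the L1 latency setting; for a deterministic workload the counts should match. A minimal sketch, using stand-in log contents in place of real simulator output (the file names and counts below are hypothetical):

```shell
# Stand-in logs for two runs differing only in L1 latency
# (in practice these would be the captured GPGPU-Sim output).
echo "gpu_tot_sim_insn = 2048" > run_lat20.log
echo "gpu_tot_sim_insn = 2176" > run_lat100.log

# Extract the total instruction count from each log and compare.
for f in run_lat20.log run_lat100.log; do
  printf '%s: ' "$f"
  grep -o 'gpu_tot_sim_insn = [0-9]*' "$f"
done
```

If the two extracted counts differ, the instruction count is latency-dependent, which is the behavior reported above.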

@SerinaTan

I saw similar behavior with my Tensor Core workloads. I made pull request #142, which might also fix your issue.
