For a weird congestion case #8

Closed
tigereatsheep opened this issue Apr 23, 2024 · 8 comments

Comments

@tigereatsheep

s.t. cuhk-eda/ripple#12
Hello Li, reopening the issue here.
I downloaded the four papers on your work today; I will need some time to read them before continuing to debug Xplace. Thank you very much!

@tigereatsheep
Author

Hello, Li.
After a round of parameter tuning, congestion dropped from 30% to 10%. The remaining problem is that the final overlap (15% at best) will not go any lower; it looks like some local regions are too crowded. Two questions about the code:
1. Where do I modify the progressive FFT filtering mentioned in the paper? I want to boost the final high-frequency components.
2. How can I weight the WA gradients coming from large nets?

@tigereatsheep
Author

tigereatsheep commented Apr 24, 2024

One more small issue: when I turn off --use_precond, I get an error:
```text
Traceback (most recent call last):
  File "/home/tigereatsheep/workspace/Xplace/main.py", line 104, in <module>
    main()
  File "/home/tigereatsheep/workspace/Xplace/main.py", line 100, in main
    run_placement_main(args, logger)
  File "/home/tigereatsheep/workspace/Xplace/src/run_placement.py", line 41, in run_placement_main
    run_placement_single(args, logger)
  File "/home/tigereatsheep/workspace/Xplace/src/run_placement.py", line 10, in run_placement_single
    res = run_placement_main_nesterov(args, logger)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tigereatsheep/workspace/Xplace/src/run_placement_nesterov.py", line 109, in run_placement_main_nesterov
    init_lr = estimate_initial_learning_rate(obj_and_grad_fn, trunc_node_pos_fn, mov_node_pos, args.lr)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tigereatsheep/workspace/Xplace/src/initializer.py", line 147, in estimate_initial_learning_rate
    x_k_1 = (constraint_fn(x_k - lr * g_k)).clone().detach().requires_grad_(True)
                                 ~~~^~~~~
TypeError: unsupported operand type(s) for *: 'float' and 'NoneType'
```

The tilde markers point at `lr * g_k`.
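From the traceback, the gradient `g_k` returned by `obj_and_grad_fn` is `None` once the preconditioner is disabled, so `lr * g_k` fails. As a sketch only (the helper below is hypothetical, not the actual code in `src/initializer.py`), a minimal guard would turn this into a readable error:

```python
import torch

def safe_descent_step(constraint_fn, x_k: torch.Tensor, lr: float, g_k):
    """Hypothetical guard around the step at src/initializer.py:147."""
    if g_k is None:
        # With --use_precond off, the gradient callback apparently returns
        # None, which makes `lr * g_k` raise the TypeError shown above.
        raise RuntimeError("g_k is None; run with --use_precond enabled")
    return constraint_fn(x_k - lr * g_k).clone().detach().requires_grad_(True)
```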

@liulixinkerry
Member

Enabling --use_precond would be better. This parameter is highly correlated with the solution quality.
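For background, ePlace-style placers (the framework Xplace builds on) typically precondition by dividing each movable node's gradient by an approximate diagonal Hessian, roughly the node's pin count plus the density weight times its area, which keeps update magnitudes comparable between small cells and large macros. A schematic sketch, not Xplace's actual implementation:

```python
import torch

def precondition(grad: torch.Tensor,        # (num_nodes, 2) raw gradient
                 pin_cnt: torch.Tensor,     # (num_nodes,) pins per node
                 area: torch.Tensor,        # (num_nodes,) node areas
                 density_weight: float) -> torch.Tensor:
    # ePlace-style diagonal preconditioner: divide each node's gradient by
    # (pin count + lambda * area), clamped so tiny nodes are not blown up.
    precond = (pin_cnt + density_weight * area).clamp(min=1.0)
    return grad / precond.unsqueeze(-1)
```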

@tigereatsheep
Author

All right. I'm worried that in some special cases this may disrupt the balance between HPWL and overflow.

@liulixinkerry
Member

liulixinkerry commented Apr 24, 2024

1. In our default `main` branch, the NN-assisted gradient is not enabled. If you wish to modify the high-frequency component, you will need to switch to the `neural` branch. However, the `neural` branch does not support routability optimization, so since you want to optimize routability, I suggest using the `main` branch.
2. Currently, we do not have an API to adjust the net weight for increasing the weight of a high-pin net. However, you can make the change manually by (1) setting a large [--ignore_net_degree](https://github.com/cuhk-eda/Xplace/blob/main/main.py#L32) and (2) adding your custom net weight in the [pin grad](https://github.com/cuhk-eda/Xplace/blob/main/cpp_to_py/wa_wirelength_hpwl_cuda/wa_wirelength_hpwl_cuda_kernel.cu#L269-L270). You can modify the code in [this folder](https://github.com/cuhk-eda/Xplace/tree/main/cpp_to_py/wa_wirelength_hpwl_cuda) to pass the net weight parameter to the CUDA function. Basically, `net_weight` is a `float` tensor whose size and indexing can follow `net_mask` (a Python sketch of this indexing follows below). Please feel free to contact me if you have any questions about modifying the code.
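To illustrate the indexing described in point 2, here is a PyTorch sketch; the names `pin_grad` and `pin2net_idx` are assumptions for illustration, not the kernel's actual identifiers:

```python
import torch

# Hypothetical Python analogue of the per-pin weighting suggested above.
# pin2net_idx maps each pin to its net (as the CUDA kernel's indexing does);
# net_weight is a float tensor sized and indexed like net_mask.
def weight_pin_grad(pin_grad: torch.Tensor,     # (num_pins, 2) WA pin gradient
                    pin2net_idx: torch.Tensor,  # (num_pins,) int64 net ids
                    net_weight: torch.Tensor):  # (num_nets,) float weights
    # Scale each pin's gradient by the weight of the net it belongs to.
    return pin_grad * net_weight[pin2net_idx].unsqueeze(-1)

# Example: weight high-pin nets more heavily.
num_nets, num_pins = 4, 10
pin2net_idx = torch.randint(0, num_nets, (num_pins,))
net_degree = torch.bincount(pin2net_idx, minlength=num_nets).float()
net_weight = 1.0 + net_degree.log1p()           # grows with net degree
pin_grad = torch.randn(num_pins, 2)
weighted = weight_pin_grad(pin_grad, pin2net_idx, net_weight)
```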

@tigereatsheep
Author

Hello, Li. I've implemented large-net weighting in wa_wirelength_hpwl_cuda_kernel.cu.
Congestion dropped from 10% to 1%.
I think the WA function pays more attention to small nets (and less to large nets) due to the log-sum denominator.
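For reference, the standard weighted-average (WA) wirelength model (notation here is generic, not Xplace's) smooths the max/min pin coordinates of a net $e$ as

$$
\tilde{W}_e(\mathbf{x}) \;=\; \frac{\sum_{i \in e} x_i\, e^{x_i/\gamma}}{\sum_{i \in e} e^{x_i/\gamma}} \;-\; \frac{\sum_{i \in e} x_i\, e^{-x_i/\gamma}}{\sum_{i \in e} e^{-x_i/\gamma}},
$$

so each pin's gradient is normalized by a sum of exponentials over all pins of the net: the more pins a net has, the smaller any single pin's share, which is consistent with the observation above that large nets receive weaker per-pin gradients.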
[Screenshot: 截图 2024-04-25 16-15-54]
If I can explain it clearly, it might be worth writing up as a short paper.

@liulixinkerry
Member

liulixinkerry commented Apr 25, 2024

Glad to hear that.

@tigereatsheep
Author

[Image]
Thank you very much for your help! Attaching a figure and closing the issue.
