
Discrepancy between reported and benchmarked seconds in solve timing #692

Open
jameshowey opened this issue Feb 14, 2025 · 2 comments

@jameshowey

I am running Cbc in an AWS Lambda layer. The Lambda function has a timeout of 900 seconds, and I am requesting a timeout of 600 seconds on the command line:

/tmp/model.lp -sec 600 -tune 99000000 -strong 0 -sosP orderhigh -solve -solu /tmp/result.txt -quit

The process is terminated after 900 seconds, with the last reported solver time at 547 seconds:

2025-02-14T00:41:00.562Z 9b5f6813-421b-4093-9e5b-eeb15a56e118 info Cbc0010I After 73300 nodes, 11244 on tree, 386796.69 best solution, best possible 1112224.4 (547.11 seconds)

REPORT RequestId: 9b5f6813-421b-4093-9e5b-eeb15a56e118 Duration: 900000.00 ms Billed Duration: 900000 ms Memory Size: 1792 MB Max Memory Used: 363 MB Status: timeout

This is a sample of the log output.

timeoutfail.txt

I extracted the timestamps and reported seconds from the sample and found a strong linear relationship. Disregarding the intercept:
timestamp = 1.58 * reported
reported = 0.633 * timestamp

Using this relationship, if I want Cbc to exit before 900 seconds, I should set -sec 569 or less. In my application there is some setup before Cbc is invoked, so I will have to pad that further.

What could account for these results? (Just spitballing, but that 0.633 ratio looks pretty close to 60/100.)

R code and results follow:

data <- data.frame(
  elapsed  = c(25.63, 70.01, 116.23, 186.85, 282.18, 335.33, 388.09, 443.33, 503.73, 554.67,
               609.13, 648.07, 665.61, 682.76, 705.87, 705.87, 722.43, 743.93, 743.93, 783.17,
               783.17, 820.19, 858.27, 858.27),
  reported = c(15.01, 43.13, 73.39, 118.22, 178.38, 212.67, 246.36, 281.34, 319.99, 352.72,
               387.59, 413.18, 424.36, 435.37, 440.01, 450.08, 460.69, 462.84, 474.00, 476.99,
               499.38, 522.93, 524.08, 547.11)
)

plot(data$reported, data$elapsed,
     xlab = "Reported Seconds",
     ylab = "Elapsed Timestamp Seconds",
     main = "Reported Seconds vs. Elapsed Timestamp Seconds",
     pch = 19, col = "blue")
grid()  # add a grid for easier reading

fit <- lm(elapsed ~ reported, data = data)
print(coef(fit))
summary(fit)

print(coef(fit))
(Intercept)    reported
  -1.270217    1.587020

summary(fit)

Call:
lm(formula = elapsed ~ reported, data = data)

Residuals:
    Min      1Q  Median      3Q     Max
 -8.735  -6.945  -2.361   1.480  27.814

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  -1.27022    4.98621  -0.255    0.801
reported      1.58702    0.01301 122.020   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 10.2 on 22 degrees of freedom
Multiple R-squared:  0.9985,    Adjusted R-squared:  0.9985
F-statistic: 1.489e+04 on 1 and 22 DF,  p-value: < 2.2e-16
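
For completeness, here is a small sketch (nothing Cbc-specific, just inverting the fit above; the variable names are only for illustration) that estimates the largest -sec value that should keep the wall clock under the 900-second Lambda limit:

# invert elapsed = intercept + slope * reported to find the reported-seconds
# budget corresponding to a 900 s wall-clock limit
b <- coef(fit)
wall_limit <- 900
sec_budget <- (wall_limit - b[["(Intercept)"]]) / b[["reported"]]
sec_budget
# roughly 568, in line with the "-sec 569 or less" estimate above
# (before padding for the setup that runs ahead of Cbc)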

@jameshowey

OK, this may be something in the Lambda environment. I am seeing the numbers agree on similar problems. I'll investigate further.

@jameshowey

I reran the identical workload; the timestamp readings matched the Cbc seconds, and Cbc gracefully timed me out at the expected time. I've got nothing for a Cbc repro. If I had $29 I'd give AWS a look.
