
Wrongly approximated hessians in the scipy interface #174

Open

jhelgert opened this issue Jan 4, 2023 · 1 comment

Comments

@jhelgert (Contributor) commented Jan 4, 2023

The scipy interface automatically runs Ipopt with a limited-memory quasi-Newton approximation whenever no Hessian is passed, see here.
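For reference, here's a minimal sketch of what that fallback amounts to, assuming the options dict of minimize_ipopt is forwarded to Ipopt via add_option (my assumption, not verified against the wrapper source):

import numpy as np
from cyipopt import minimize_ipopt
from scipy.optimize import rosen, rosen_der

x0 = 1.25*np.ones(5)

# No hess= argument: the interface should behave as if we had requested
# Ipopt's quasi-Newton approximation ourselves (assumed equivalent call):
res = minimize_ipopt(rosen, jac=rosen_der, x0=x0,
                     options={'hessian_approximation': 'limited-memory'})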

However, as the following example demonstrates, there's something going wrong with the scipy interface:

import numpy as np
from cyipopt import minimize_ipopt
from scipy.optimize import rosen, rosen_der

# min rosen(x) s.t. 10 - x_2^2 - x_3 <= 0, 100 - x_5^2 <= 0

def con(x):
    return np.array([10 - x[1]**2 - x[2], 100.0 - x[4]**2])


def con_jac(x):
    return np.array([
        [0, -2*x[1], -1.0,  0,       0],
        [0,       0,    0,  0, -2*x[4]]
    ])


x0 = 1.25*np.ones(5)
constr = {'type': 'ineq', 'fun': con, 'jac': con_jac}
res = minimize_ipopt(rosen, jac=rosen_der, x0=x0, constraints=constr)
print(res)

Ipopt can't find a local minimizer for this simple problem and reaches the iteration limit. If we solve the same optimization problem via the Problem interface, it finds the local minimizer in fewer than 30 iterations:

import numpy as np
import cyipopt
from scipy.optimize import rosen, rosen_der

class Problem:
    def objective(self, x):
        return rosen(x)

    def gradient(self, x):
        return rosen_der(x)

    def constraints(self, x):
        return np.array([ 10 -x[1]**2 - x[2], 100.0 - x[4]**2 ])

    def jacobian(self, x):
        # same constraint Jacobian as con_jac in the first example
        return np.array([
            [0, -2*x[1], -1.0,  0,       0],
            [0,       0,    0,  0, -2*x[4]]
        ])

lb = [-np.inf, -np.inf, -np.inf, -np.inf, -np.inf]
ub = [np.inf, np.inf, np.inf, np.inf, np.inf]

cl = [-np.inf, -np.inf]
cu = [0, 0]

x0 = 1.25*np.ones(5)

nlp = cyipopt.Problem(
    n=len(x0), 
    m=len(cl), 
    problem_obj=Problem(), 
    lb=lb, 
    ub=ub, 
    cl=cl, 
    cu=cu
)

nlp.add_option("hessian_approximation", "limited-memory")
nlp.add_option("mu_strategy", "adaptive")

x, info = nlp.solve(x0)

The Hessians are probably not correctly approximated under the hood in the first example. I guess something is broken in the Cython wrapper, since it only checks whether the passed problem instance has a hessian() method. I'll try to have a look at it and provide a PR in the next few days.
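To illustrate, here's a hypothetical sketch of the kind of check I mean (illustrative names, not the actual Cython source):

def needs_hessian_approximation(problem_obj):
    # If the problem object provides no hessian() method, Ipopt has to
    # be configured with hessian_approximation = 'limited-memory'.
    return not hasattr(problem_obj, 'hessian')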

PS: This bug is not related to #170.

@chrhansk (Contributor)

@jhelgert I am afraid you made an error in your formulation:

  • Constraints with type=ineq in the scipy interface have a lower bound of 0 and an upper bound of np.inf
  • In the manual example, you set the lower bound to -np.inf and the upper bound to 0

Thus, you are solving two different problems, which explains the differing optimization results (and has nothing to do with any Hessians). If you flip the signs of the return values of the con and con_jac functions in the first example, it solves just as well as the problem in the second one.
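Concretely, here's a sketch of the first example with the signs flipped so that it matches scipy's fun(x) >= 0 convention for type='ineq':

import numpy as np
from cyipopt import minimize_ipopt
from scipy.optimize import rosen, rosen_der

# scipy's 'ineq' convention is fun(x) >= 0, which matches the
# cl = -inf, cu = 0 formulation once the signs are flipped:
def con(x):
    return np.array([x[1]**2 + x[2] - 10.0, x[4]**2 - 100.0])

def con_jac(x):
    return np.array([
        [0, 2*x[1], 1.0,  0,      0],
        [0,      0,   0,  0, 2*x[4]]
    ])

x0 = 1.25*np.ones(5)
constr = {'type': 'ineq', 'fun': con, 'jac': con_jac}
res = minimize_ipopt(rosen, jac=rosen_der, x0=x0, constraints=constr)
print(res)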
