22 papers with code • 1 benchmark • 1 dataset


Fusing deep learning with combinatorial algorithms promises transformative advances in artificial intelligence.

Local search is one of the simplest families of algorithms in combinatorial optimization, yet it yields strong approximation guarantees for canonical NP-hard problems such as the traveling salesman problem and vertex cover.
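As an illustration of the local-search family mentioned above, here is a minimal 2-opt local search for the Euclidean TSP. This is a generic sketch, not the method of any paper listed on this page; the function names are my own.

```python
import math

def tour_length(points, tour):
    """Total Euclidean length of a closed tour over the given points."""
    return sum(
        math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def two_opt(points, tour):
    """Greedy 2-opt local search: repeatedly reverse a tour segment
    whenever doing so shortens the tour, until no improving move exists."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # reversing the whole tour changes nothing
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Compare the two edges removed with the two edges added.
                old = math.dist(points[a], points[b]) + math.dist(points[c], points[d])
                new = math.dist(points[a], points[c]) + math.dist(points[b], points[d])
                if new < old - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

On a unit square, starting from a crossing tour, the search uncrosses the edges and returns the length-4 perimeter. For Euclidean instances, 2-opt is the classic example of a local-search heuristic with provable approximation behavior.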

Despite the computational expense, and without much engineering or heuristic design, Neural Combinatorial Optimization achieves close-to-optimal results on 2D Euclidean graphs with up to 100 nodes.

Furthermore, to approximate solutions to constrained combinatorial optimization problems such as the TSP with time windows, we train hierarchical GPNs (HGPNs) using RL to learn a hierarchical policy that finds an optimal city permutation under constraints.

Ranked #1 on Traveling Salesman Problem on TSPLIB

In this work, we propose a general and hybrid approach, based on DRL and CP, for solving combinatorial optimization problems.

The Traveling Salesman Problem (TSP) is the most popular and most studied combinatorial optimization problem, with work dating back to von Neumann in 1951.

For the traveling salesman problem (TSP), existing supervised-learning-based algorithms suffer severely from a lack of generalization ability.

This paper presents a powerful genetic algorithm (GA) to solve the traveling salesman problem (TSP).
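A GA for the TSP typically encodes tours as permutations and uses permutation-preserving operators. The sketch below is my own toy implementation, not the paper's algorithm: it combines order crossover (OX), swap mutation, tournament selection, and one-elite survival.

```python
import math
import random

def tour_length(points, tour):
    """Total Euclidean length of a closed tour."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def order_crossover(p1, p2, rng):
    """Order crossover (OX): inherit a slice from p1, then fill the
    remaining positions with the missing cities in p2's order."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    kept = set(child[i:j + 1])
    fill = iter(c for c in p2 if c not in kept)
    for k in range(n):
        if child[k] is None:
            child[k] = next(fill)
    return child

def ga_tsp(points, pop_size=60, generations=200, mutation_rate=0.2, seed=0):
    """Minimal GA: tournament selection, OX crossover, swap mutation,
    and elitism (the best tour always survives)."""
    rng = random.Random(seed)
    n = len(points)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(points, t))
        nxt = [pop[0]]  # elitism: keep the current best tour
        while len(nxt) < pop_size:
            # tournament selection of size 3 for each parent
            p1 = min(rng.sample(pop, 3), key=lambda t: tour_length(points, t))
            p2 = min(rng.sample(pop, 3), key=lambda t: tour_length(points, t))
            child = order_crossover(p1, p2, rng)
            if rng.random() < mutation_rate:  # swap mutation
                a, b = rng.sample(range(n), 2)
                child[a], child[b] = child[b], child[a]
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda t: tour_length(points, t))
```

OX is chosen here because naive one-point crossover on permutations produces invalid tours with duplicated cities; OX guarantees every child is a valid permutation.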

We address the Traveling Salesman Problem (TSP), a famous NP-hard combinatorial optimization problem.

We propose a policy gradient algorithm to learn a stochastic policy that selects 2-opt operations given a current solution.