Traveling Salesman Problem
79 papers with code • 1 benchmark • 1 dataset
Most implemented papers
Neural Combinatorial Optimization with Reinforcement Learning
Despite the computational expense, and without much engineering or heuristic design, Neural Combinatorial Optimization achieves close-to-optimal results on 2D Euclidean graphs with up to 100 nodes.
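In approaches of this kind the reward signal is typically the negative length of the constructed tour. A minimal sketch of that 2D Euclidean tour-length computation is shown below; the function name and the random instance are illustrative, not taken from the paper's code.

```python
import numpy as np

def tour_length(coords: np.ndarray, tour: np.ndarray) -> float:
    """Total Euclidean length of a closed tour over 2D node coordinates.

    coords: (n, 2) array of node positions
    tour:   (n,) permutation of node indices
    """
    ordered = coords[tour]
    # Distance from each node to the next one, wrapping back to the start.
    diffs = np.roll(ordered, -1, axis=0) - ordered
    return float(np.linalg.norm(diffs, axis=1).sum())

# Example: random 100-node instance; the RL reward would be the negative length.
rng = np.random.default_rng(0)
coords = rng.random((100, 2))
tour = rng.permutation(100)
reward = -tour_length(coords, tour)
```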
Differentiation of Blackbox Combinatorial Solvers
Achieving fusion of deep learning with combinatorial algorithms promises transformative changes to artificial intelligence.
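The central idea of this line of work is a perturb-and-resolve backward pass: the solver is called once more on an input shifted in the direction of the incoming gradient, and the difference of the two discrete solutions serves as a surrogate gradient. The sketch below is a hedged PyTorch rendition; the class name, the assumption that the solver's output has the same shape as its cost input, and the exact sign/scaling convention are illustrative rather than the authors' code.

```python
import torch

class BlackboxSolverLayer(torch.autograd.Function):
    """Perturb-and-resolve surrogate gradient for a blackbox combinatorial solver.

    `solver` maps a continuous cost vector w to a discrete solution y of the
    same shape (e.g. a 0/1 edge-indicator vector for a tour).
    """

    @staticmethod
    def forward(ctx, w, solver, lam):
        y = solver(w.detach())           # blackbox call, outside autograd
        ctx.save_for_backward(w, y)
        ctx.solver, ctx.lam = solver, lam
        return y

    @staticmethod
    def backward(ctx, grad_output):
        w, y = ctx.saved_tensors
        # Re-solve on the input perturbed along the incoming gradient.
        y_perturbed = ctx.solver(w + ctx.lam * grad_output)
        # Scaled difference of solutions as the surrogate gradient
        # (sign/scaling conventions vary; see the paper for the exact form).
        grad_w = -(y - y_perturbed) / ctx.lam
        return grad_w, None, None

# Usage: y = BlackboxSolverLayer.apply(costs, my_tsp_solver, 20.0)
```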
Evolution of Heuristics: Towards Efficient Automatic Algorithm Design Using Large Language Model
EoH represents the ideas behind heuristics in natural language, termed "thoughts".
RL4CO: an Extensive Reinforcement Learning for Combinatorial Optimization Benchmark
To fill this gap, we introduce RL4CO, a unified and extensive benchmark with in-depth library coverage of 23 state-of-the-art methods and more than 20 CO problems.
Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers
We also obtain SOTA results on QM9, MOLPCBA, and LIT-PCBA molecular property prediction benchmarks via transfer learning.
Donkey and Smuggler Optimization Algorithm: A Collaborative Working Approach to Path Finding
The two collaborating agents are the Smuggler and the Donkeys.
Combinatorial Optimization by Graph Pointer Networks and Hierarchical Reinforcement Learning
Furthermore, to approximate solutions to constrained combinatorial optimization problems such as the TSP with time windows, we train hierarchical GPNs (HGPNs) using RL, which learn a hierarchical policy to find an optimal city permutation under constraints.
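For context, the time-window constraint mentioned above requires each city to be visited within an allowed interval. Below is a minimal, illustrative feasibility check for a candidate tour; the function name, the unit-speed travel-time model, and the waiting-is-allowed convention are assumptions, not details taken from the paper.

```python
import numpy as np

def is_time_window_feasible(coords, tour, windows, speed=1.0):
    """Check whether a tour respects [earliest, latest] visit windows.

    coords:  (n, 2) node positions; travel time = Euclidean distance / speed
    tour:    (n,) permutation of node indices, starting at the depot tour[0]
    windows: (n, 2) array of (earliest, latest) times per node
    Waiting at a node until its window opens is allowed.
    """
    t = 0.0
    for prev, cur in zip(tour[:-1], tour[1:]):
        t += np.linalg.norm(coords[cur] - coords[prev]) / speed
        earliest, latest = windows[cur]
        if t > latest:          # arrived after the window closed: infeasible
            return False
        t = max(t, earliest)    # wait until the window opens if early
    return True
```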
Exploring the Loss Landscape in Neural Architecture Search
In this work, we show that (1) the simplest hill-climbing algorithm is a powerful baseline for NAS, and (2) when the noise in popular NAS benchmark datasets is reduced to a minimum, hill-climbing outperforms many popular state-of-the-art algorithms.
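The paper's hill-climbing claim concerns architecture search, but the underlying local-search principle is the same one classically applied to this page's problem. Purely as an illustration of that principle (not the paper's code), here is a first-improvement 2-opt hill climber for a TSP tour.

```python
import numpy as np

def tour_length(coords, tour):
    """Closed-tour Euclidean length for a list of node indices."""
    ordered = coords[list(tour) + [tour[0]]]
    return float(np.linalg.norm(np.diff(ordered, axis=0), axis=1).sum())

def hill_climb_2opt(coords, tour):
    """First-improvement hill climbing with a 2-opt neighborhood:
    reverse a tour segment whenever doing so shortens the tour, and stop
    once no single reversal improves it (a local optimum)."""
    tour = list(tour)
    best = tour_length(coords, tour)
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                length = tour_length(coords, candidate)
                if length < best - 1e-12:
                    tour, best, improved = candidate, length, True
    return tour, best
```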
Learning to Iteratively Solve Routing Problems with Dual-Aspect Collaborative Transformer
Moreover, the positional features are embedded through a novel cyclic positional encoding (CPE) method to allow the Transformer to effectively capture the circularity and symmetry of VRP solutions (i.e., cyclic sequences).
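The idea behind a cyclic encoding is that positions are mapped onto a circle, so the first and last positions of a tour end up adjacent in embedding space. The sketch below only illustrates that idea with integer-frequency sinusoids; the exact CPE formulation in the paper may differ, and the function name and the even-dimension assumption are mine.

```python
import numpy as np

def cyclic_positional_encoding(n_positions: int, d_model: int) -> np.ndarray:
    """Periodic sin/cos positional features (d_model assumed even).

    Each position maps to an angle on the unit circle; using integer
    multiples of that base angle keeps every feature column n-periodic,
    so positions 0 and n-1 are neighbors, matching a cyclic tour.
    """
    angles = 2.0 * np.pi * np.arange(n_positions)[:, None] / n_positions
    freqs = np.arange(1, d_model // 2 + 1)[None, :]   # integer frequencies
    enc = np.empty((n_positions, d_model))
    enc[:, 0::2] = np.sin(angles * freqs)
    enc[:, 1::2] = np.cos(angles * freqs)
    return enc
```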
Backpropagation through Combinatorial Algorithms: Identity with Projection Works
Embedding discrete solvers as differentiable layers has given modern deep learning architectures combinatorial expressivity and discrete reasoning capabilities.
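As a rough sketch of what an identity backward pass looks like in practice: the discrete solver runs normally on the forward pass, and during backpropagation it is treated as if it were the identity map, so the incoming gradient flows straight through to the cost vector. The class name and the assumption that costs and solutions share a shape are illustrative, the sign convention depends on whether the solver minimizes or maximizes the linear cost, and the projection/normalization of solver inputs discussed in the paper is omitted here.

```python
import torch

class SolverWithIdentityBackward(torch.autograd.Function):
    """Blackbox solver forward, identity backward (straight-through style)."""

    @staticmethod
    def forward(ctx, costs, solver):
        # Run the combinatorial solver on detached costs; its output is
        # assumed to have the same shape as the cost vector.
        return solver(costs.detach())

    @staticmethod
    def backward(ctx, grad_output):
        # Pretend the solver were the identity: pass the gradient through
        # unchanged (negate it if the solver minimizes the linear cost and
        # your convention requires it; see the paper for details).
        return grad_output, None

# Usage: solution = SolverWithIdentityBackward.apply(costs, my_solver)
```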