This paper presents our work in the WMT 2021 Quality Estimation (QE) Shared Task.
This paper describes our participation in the IWSLT-2021 offline speech translation task.
This reconfigurable design with omni-adaptive fingers enables us to systematically investigate the optimal finger arrangement for robust grasping.
Robotic fingers made of soft materials and compliant structures usually adapt better when interacting with unstructured physical environments.
We use soft, stuffed toys for training, instead of everyday objects, to reduce integration complexity and computational burden, and we exploit such rigid-soft interaction by switching to soft gripper fingers when handling rigid, daily-life items such as the Yale-CMU-Berkeley (YCB) objects.
It is widely known that well-designed perturbations, small enough to be imperceptible to the human eye, can cause state-of-the-art machine learning classifiers to mislabel an image.
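As an illustration of how a small, directed perturbation can flip a classifier's decision, here is a minimal gradient-sign sketch on a toy linear model. The model, weights, and perturbation budget are all hypothetical placeholders, not the classifiers or attacks discussed in the papers above.

```python
import numpy as np

# Toy linear classifier (illustrative only): score > 0 -> class 1, else class 0.
w = np.array([0.5, -1.0, 0.8])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.2, -0.3, 0.1])   # clean input, classified as class 1

# Gradient-sign perturbation: for a linear model, the gradient of the
# score with respect to the input is simply w, so stepping against
# sign(w) lowers the score most per unit of max-norm budget.
eps = 0.5                        # perturbation budget (hypothetical)
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the bounded perturbation flips the label
```

The same principle underlies attacks on deep networks, where the input gradient is obtained by backpropagation instead of being the weight vector itself.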
The qFool method drastically reduces the number of queries compared to previous decision-based attacks while achieving the same quality of adversarial examples.
Moreover, secondary adversarial attacks cannot be directly performed against our method, because it is based not on a neural network but on high-dimensional artificial features and a Fisher Linear Discriminant (FLD) ensemble.
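To make the FLD-ensemble idea concrete, the sketch below fits one Fisher Linear Discriminant per random feature subset and combines the members by majority vote. The data, feature dimensions, subset sizes, and ensemble size are all assumptions for illustration; the paper's actual feature extraction and ensemble construction may differ.

```python
import numpy as np

# Synthetic two-class "high-dimensional feature" data (placeholder for
# whatever artificial features the real method extracts).
rng = np.random.default_rng(0)
d = 20
X0 = rng.normal(-0.5, 1.0, size=(200, d))   # class-0 feature vectors
X1 = rng.normal(+0.5, 1.0, size=(200, d))   # class-1 feature vectors

def fit_fld(A, B):
    # Classic FLD: w = Sw^{-1} (mu_B - mu_A), threshold at the projected midpoint.
    mA, mB = A.mean(axis=0), B.mean(axis=0)
    Sw = np.cov(A, rowvar=False) + np.cov(B, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(A.shape[1]), mB - mA)
    t = w @ (mA + mB) / 2
    return w, t

# Ensemble: one FLD per random feature subset, combined by majority vote.
members = []
for _ in range(5):
    idx = rng.choice(d, size=10, replace=False)
    w, t = fit_fld(X0[:, idx], X1[:, idx])
    members.append((idx, w, t))

def predict(x):
    votes = [int(x[idx] @ w > t) for idx, w, t in members]
    return int(sum(votes) > len(votes) / 2)

# The two (noise-free) class means land on opposite sides of every member.
print(predict(np.full(d, 0.5)), predict(np.full(d, -0.5)))
```

Because each member is a closed-form linear projection rather than a differentiable network, there is no single end-to-end gradient for a standard white-box attack to follow, which is the intuition behind the robustness claim above.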
In this paper, we first propose the epsilon-neighborhood attack, which fools defensively distilled networks with a 100% success rate in the white-box setting and quickly generates adversarial examples with good visual quality.
Much of the data being created on the web contains interactions between users and items.