Distribution Based MIL Pooling Filters are Superior to Point Estimate Based Counterparts

1 Jan 2021  ·  Mustafa Umit Oner, Jared Marc Song, Hwee Kuan Lee, Wing-Kin Sung ·

Multiple instance learning (MIL) is a machine learning paradigm that learns a mapping between bags of instances and bag labels. Different MIL tasks can be solved by different MIL methods, but one component common to all MIL methods is the MIL pooling filter. Here, we propose and discuss a grouping scheme for MIL pooling filters: point estimate based pooling filters and distribution based pooling filters. The point estimate based filters include the standard pooling filters, such as ‘max’, ‘mean’ and ‘attention’ pooling. The distribution based filters include the recently proposed ‘distribution’ pooling and the newly designed ‘distribution with attention’ pooling. In this paper, we perform the first systematic analysis of these pooling filters. We theoretically show that distribution based pooling filters are superior to their point estimate based counterparts. We then empirically study the performance of five pooling filters, namely ‘max’, ‘mean’, ‘attention’, ‘distribution’ and ‘distribution with attention’, on distinct real-world MIL tasks. We show that the relative performance of the pooling filters varies across MIL tasks. Moreover, consistent with our theoretical analysis, models with distribution based pooling filters almost always performed equally well or better than those with point estimate based pooling filters.
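The grouping described above can be illustrated with a minimal numpy sketch. The ‘max’, ‘mean’ and ‘attention’ filters each reduce a bag's instance features to a single point estimate per feature dimension, while ‘distribution’ pooling summarizes each feature dimension by an estimated distribution over bins. The bin count, kernel bandwidth, and the assumption that features lie in [0, 1] are illustrative choices, not the paper's exact configuration; the attention weights are taken as given here, whereas in practice they are learned.

```python
import numpy as np

def max_pool(feats):
    # Point estimate: the per-dimension maximum over instances.
    return feats.max(axis=0)

def mean_pool(feats):
    # Point estimate: the per-dimension mean over instances.
    return feats.mean(axis=0)

def attention_pool(feats, w):
    # Point estimate: attention-weighted mean of instance features.
    # w is a scoring vector (learned in practice, assumed given here).
    a = np.exp(feats @ w)
    a = a / a.sum()                       # softmax attention weights
    return (a[:, None] * feats).sum(axis=0)

def distribution_pool(feats, num_bins=11, sigma=0.05):
    # Distribution based pooling: a kernel density estimate over each
    # feature dimension, yielding a histogram per dimension rather than
    # a single number. Assumes features are in [0, 1].
    bins = np.linspace(0.0, 1.0, num_bins)          # bin centers
    d = feats[:, :, None] - bins[None, None, :]     # (instances, dims, bins)
    k = np.exp(-d**2 / (2 * sigma**2))              # Gaussian kernel
    k = k / k.sum(axis=2, keepdims=True)            # normalize per instance
    return k.mean(axis=0)                           # (dims, bins) distributions
```

For a bag of shape `(num_instances, num_features)`, the point estimate filters return a vector of length `num_features`, while `distribution_pool` returns a `(num_features, num_bins)` array, retaining far more information about how instance features are spread within the bag.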
