The key challenge for few-shot semantic segmentation (FSS) is how to tailor a desirable interaction among support and query features and/or their prototypes, under the episodic training scenario. Most existing FSS methods implement such support/query interactions by solely leveraging plain operations - e.g., cosine similarity and feature concatenation - for segmenting the query objects. However, these interaction approaches usually cannot well capture the intrinsic object details in the query images that are widely encountered in FSS; e.g., if the query object to be segmented has holes and slots, inaccurate segmentation almost always happens. To this end, we propose a dynamic prototype convolution network (DPCN) to fully capture the aforementioned intrinsic details for accurate FSS. Specifically, in DPCN, a dynamic convolution module (DCM) is first proposed to generate dynamic kernels from the support foreground; information interaction is then achieved by convolving the query features with these kernels. Moreover, we equip DPCN with a support activation module (SAM) and a feature filtering module (FFM) to generate pseudo masks and filter out background information for the query images, respectively. Together, SAM and FFM mine enriched context information from the query features. DPCN is also flexible and efficient under the k-shot FSS setting. Extensive experiments on PASCAL-5^i and COCO-20^i show that DPCN yields superior performance under both 1-shot and 5-shot settings. © 2022 IEEE.
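To make the DCM idea concrete, the following is a minimal NumPy sketch of the two steps the abstract describes: deriving a kernel from the support foreground and convolving the query features with it. The masked-average-pooling prototype, the single k x k kernel, and both function names are simplifying assumptions for illustration; the actual DPCN generates multiple dynamic kernels of different shapes from deep support features.

```python
import numpy as np

def dynamic_kernel_from_support(support_feat, support_mask, ksize=3):
    """Hypothetical kernel generator: masked average pooling of the
    support foreground, broadcast into a C x k x k dynamic kernel."""
    # support_feat: (C, H, W) features; support_mask: (H, W) binary foreground mask
    fg = support_feat * support_mask[None]
    proto = fg.sum(axis=(1, 2)) / (support_mask.sum() + 1e-6)  # (C,) prototype
    kernel = np.broadcast_to(
        proto[:, None, None], (proto.shape[0], ksize, ksize)
    ).copy()
    return kernel / (ksize * ksize)  # normalize over spatial extent

def dynamic_conv(query_feat, kernel):
    """Correlate query features with the dynamic kernel (stride 1,
    zero padding), yielding a single-channel support/query affinity map."""
    C, H, W = query_feat.shape
    k = kernel.shape[-1]
    pad = k // 2
    q = np.pad(query_feat, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = (q[:, i:i + k, j:j + k] * kernel).sum()
    return out
```

High responses in the resulting affinity map indicate query locations whose local features resemble the support foreground, which is the interaction the plain cosine-similarity baselines approximate per pixel without any spatial kernel.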
- Cosine similarity,
- Episodic training,
- Feature filtering,
- Information filtering,
- Information interaction,
- Query images,
- Query object,
- Segmentation methods,
- Semantic segmentation,
- Training scenario
Open access version, available at CVPR 2022 Open Access, provided by the Computer Vision Foundation.
Uploaded: 15 Feb 2023