Our paper "Fidelity-Weighted Learning", with Arash Mehrjou, Stephan Gouws, Jaap Kamps, Bernhard Schölkopf, has been accepted at Sixth International Conference on Learning Representations (ICLR2018). \o/
tl;dr
Fidelity-weighted learning (FWL) is a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data. It modulates the parameter updates of a student network, which is trained on the task we care about, on a per-sample basis, according to the posterior confidence in its label quality as estimated by a Bayesian teacher that has access to a rather small amount of high-quality labels.
The success of deep neural networks to date depends strongly on the availability of labeled data, which is costly and not always easy to obtain. Usually it is much easier to obtain small quantities of high-quality labeled data and large quantities of unlabeled data. How to best integrate these two different sources of information during training is an active pursuit in the field of semi-supervised learning, and here, with FWL, we propose an idea to address this question.
Learning from samples of variable quality
For a large class of tasks, it is also easy to define one or more so-called "weak annotators": additional (albeit noisy) sources of weak supervision based on heuristics, or "weaker", biased classifiers trained on e.g. non-expert crowd-sourced data or data from related domains. While easy and cheap to generate, it is not immediately clear if and how these additional weakly-labeled data can be used to train a stronger classifier for the task we care about. More generally, in almost all practical applications, machine learning systems have to deal with data samples of variable quality. For example, in a large dataset of images only a small fraction of samples may be labeled by experts, while the rest may be crowd-sourced using e.g. Amazon Mechanical Turk. In addition, in some applications labels are intentionally perturbed for privacy reasons.
Assuming we can obtain a large set of weakly-labeled data in addition to a much smaller training set of "strong" labels, the simplest approach is to expand the training set by including the weakly-supervised samples (all samples are treated as equal). Alternatively, one may pretrain on the weak data and then fine-tune on observations from the true function or distribution (which we call strong data). Indeed, a small amount of expert-labeled data can be augmented in this way with a large set of raw data labeled by a heuristic function, to train a more accurate neural ranking model. The downside is that such approaches are oblivious to the amount or source of noise in the labels.
All labels are equal, but some labels are more equal than others.
Inspired by George Orwell, Animal Farm, 1945.
We argue that treating weakly-labeled samples uniformly (i.e. each weak sample contributes equally to the final classifier) ignores potentially valuable information about label quality. Instead, we propose Fidelity-Weighted Learning (FWL), a Bayesian semi-supervised approach that leverages a small amount of data with true labels to generate a larger training set with confidence-weighted weakly-labeled samples, which can then be used to modulate the fine-tuning process based on the fidelity (or quality) of each weak sample. By directly modeling the inaccuracies introduced by the weak annotator in this way, we can control the extent to which we make use of this additional source of weak supervision: more for confidently-labeled weak samples close to the true observed data, and less for uncertain samples further away from the observed data.
How does fidelity-weighted learning work?
We propose a setting consisting of two main modules:
- One is called the student and is in charge of learning a suitable data representation and performing the main prediction task,
- The other is the teacher which modulates the learning process by modeling the inaccuracies in the labels.

We assume we are given a large set of unlabeled data samples, a heuristic labeling function called the weak annotator, and a small set of high-quality samples labeled by experts, called the strong dataset, consisting of tuples of training samples $x_j$ and their true labels $y_j$, i.e. $D_s = \{(x_j, y_j)\}$. We consider the latter to be observations from the true target function that we are trying to learn.
We use the weak annotator to generate labels for the unlabeled samples. The generated labels are noisy due to the limited accuracy of the weak annotator. This gives us the weak dataset, consisting of tuples of training samples $x_i$ and their weak labels $\tilde{y}_i$, i.e. $D_w = \{(x_i, \tilde{y}_i)\}$. Note that we can generate a large amount of weak training data $D_w$ at almost no cost using the weak annotator. In contrast, we have only a limited amount of observations from the true function, so $|D_w| \gg |D_s|$.
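To make this setup concrete, here is a minimal sketch in Python/NumPy of such a data regime. Everything in it is an illustrative assumption (a 1-D task, a sinusoidal true function, a deliberately biased weak annotator, and the sample sizes), not the paper's actual configuration; it only mirrors the $|D_w| \gg |D_s|$ regime described above, and is reused by the later sketches.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_function(x):
    # The unknown target function; we only see a few expert labels of it.
    return np.sin(x)

def weak_annotator(x):
    # A cheap, biased heuristic label source (hypothetical):
    # deliberately wrong for x < 0 to mimic annotator bias.
    return np.sin(x) * (x >= 0)

# Strong dataset D_s: a small number of expert-labeled samples.
x_strong = rng.uniform(-3.0, 3.0, size=20)
y_strong = true_function(x_strong)

# Weak dataset D_w: many cheaply labeled samples, so |D_w| >> |D_s|.
x_weak = rng.uniform(-3.0, 3.0, size=2000)
y_weak = weak_annotator(x_weak)
```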
Here, we assume the student to be a neural network and the teacher to be a Bayesian function approximator. The training process consists of three phases (illustrated in the figure above):
- Step 1: Pre-train the student on $D_w$ using weak labels generated by the weak annotator.
The main goal of this step is to learn a task-dependent representation of the data as well as to pretrain the student. The student function is a neural network consisting of two parts: the first part $\psi(\cdot)$ learns the data representation and the second part $\varphi(\cdot)$ performs the prediction task (e.g. classification). The overall function is therefore $\hat{y} = \varphi(\psi(x))$. The student is trained on all samples of the weak dataset $D_w$. For brevity, in the following, we will refer to both a data sample $x$ and its representation $\psi(x)$ by $x$ when it is obvious from the context.
From the self-supervised feature learning point of view, we can say that representation learning in this step solves a surrogate task of approximating the expert knowledge, for which a noisy supervision signal is provided by the weak annotator. A minimal sketch of such a student is given below.
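Continuing the toy sketch above, a minimal two-part student might look as follows, assuming PyTorch; the architecture and hyper-parameters are illustrative guesses, not those used in the paper.

```python
import torch
import torch.nn as nn

class Student(nn.Module):
    """Two-part student: psi learns the representation, phi predicts."""
    def __init__(self, hidden=64):
        super().__init__()
        self.psi = nn.Sequential(                  # representation part psi(.)
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.phi = nn.Linear(hidden, 1)            # prediction part phi(.)

    def forward(self, x):
        return self.phi(self.psi(x))               # y_hat = phi(psi(x))

# Step 1: pre-train the student on the weak dataset (x_weak, y_weak).
student = Student()
optimizer = torch.optim.SGD(student.parameters(), lr=1e-2)
xw = torch.tensor(x_weak, dtype=torch.float32).unsqueeze(1)
yw = torch.tensor(y_weak, dtype=torch.float32).unsqueeze(1)
for _ in range(500):                               # illustrative iteration count
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(student(xw), yw)
    loss.backward()
    optimizer.step()
```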
- Step 2: Train the teacher on the strong data $D_s$, represented in terms of the student representation $\psi(x)$, and then use the teacher to generate a soft dataset $D_{sw}$ consisting of tuples $(x_t, \bar{y}_t)$ of samples and their soft labels, together with the associated uncertainties, for all data samples.
We use a Gaussian process (GP) as the teacher to capture the label uncertainty in terms of the student representation, estimated with respect to the strong data. A prior mean and covariance function is chosen for the GP. The embedding function $\psi(\cdot)$ learned in Step 1 is then used to map the data samples to dense vectors that serve as input to the GP. Using the representation learned by the student in the previous step compensates for the lack of data in $D_s$, and lets the teacher benefit from the knowledge extracted from the large quantity of weakly-annotated data. This way, we also let the teacher see the data through the lens of the student.
We call the labels generated by the teacher soft labels, and accordingly refer to $D_{sw}$ as the soft dataset. Note that we train the GP only on the strong dataset $D_s$, but then use it to generate soft labels $\bar{y}_t$ and uncertainties $\Sigma(x_t)$ for all samples of the soft dataset.[1] A minimal sketch of such a teacher is given below.
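Continuing the sketch, the teacher could be implemented with scikit-learn's GP regressor; the kernel choice here is an illustrative assumption, and the paper's exact GP setup may differ.

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# The teacher sees the data through the student's lens: map samples
# through the learned representation psi before fitting the GP.
with torch.no_grad():
    xs = torch.tensor(x_strong, dtype=torch.float32).unsqueeze(1)
    z_strong = student.psi(xs).numpy()
    z_weak = student.psi(xw).numpy()

# Step 2: train the GP teacher on the strong data only.
teacher = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
teacher.fit(z_strong, y_strong)

# Generate the soft dataset: soft labels y_bar and uncertainties sigma
# for every sample of the weakly labeled set.
y_bar, sigma = teacher.predict(z_weak, return_std=True)
```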
- Step 3: Fine-tune the weights of the student network on the soft dataset, while modulating the magnitude of each parameter update by the teacher's confidence in the corresponding label.
The student network of Step 1 is fine-tuned using samples from the soft dataset $D_{sw}$. The uncertainty $\Sigma(x_t)$ of each sample is mapped to a confidence value (we explain how in a minute!), which is then used to determine the step size for each iteration of stochastic gradient descent (SGD). So, intuitively, for data points where we have true labels, the uncertainty of the teacher is almost zero, which means we have high confidence and take a large step when updating the parameters. However, for data points where the teacher is not confident, we down-weight the training steps of the student. This means that at these points, we keep the student function as it was trained on the weak data in Step 1.

More specifically, we update the parameters of the student by training on $D_{sw}$ using SGD:

$$\theta_{t+1} = \theta_t - \eta_t(x_t)\left(\nabla_\theta\, l(\theta_t; x_t, \bar{y}_t) + \nabla_\theta R(\theta_t)\right), \quad t = 1, \ldots, N,$$

where $l(\cdot)$ is the per-example loss, $\eta_t$ is the total learning rate, $N$ is the size of the soft dataset $D_{sw}$, $\theta$ is the parameters of the student network, and $R(\theta)$ is the regularization term. (The regularization term is the usual one used by optimization packages, e.g. weight decay, so we do not go into its details here.)

We define the total learning rate as $\eta_t(x_t) = \eta_1(t)\,\eta_2(x_t)$, where $\eta_1(t)$ is the usual learning rate of our chosen optimization algorithm, which anneals over training iterations, and $\eta_2(x_t)$ is a function of the label uncertainty $\Sigma(x_t)$ computed by the teacher for each data point. Multiplying these two terms gives us the total learning rate. In other words, $\eta_2(x_t)$ represents the fidelity (quality) of the current sample, and is used to multiplicatively modulate $\eta_1(t)$. Note that the first term does not necessarily depend on each data point, whereas the second term does. We propose

$$\eta_2(x_t) = \exp\left(-\beta\,\Sigma(x_t)\right) \qquad (1)$$

to exponentially decrease the learning rate for data point $x_t$ if its corresponding soft label $\bar{y}_t$ is unreliable (far from a true sample). In Equation (1), $\beta$ is a positive scalar hyper-parameter. Intuitively, a small $\beta$ results in a student that listens more carefully to the teacher and copies its knowledge, while a large $\beta$ makes the student pay less attention to the teacher, staying with its initial weak knowledge. More concretely, as $\beta \to 0$, the student places more trust in the labels $\bar{y}_t$ estimated by the teacher and copies the teacher's knowledge. On the other hand, as $\beta \to \infty$, the student puts less weight on the extrapolation ability of the GP, and the parameters of the student are not affected by the correcting information from the teacher. A minimal sketch of this fidelity-weighted fine-tuning loop is given below.
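Continuing the sketch above, the fine-tuning of Step 3 could look as follows; $\beta$, the base learning rate, and the number of steps are illustrative choices, and the annealing of $\eta_1$ and the regularization term are omitted for brevity.

```python
beta = 1.0     # illustrative fidelity hyper-parameter
eta_1 = 1e-3   # base learning rate eta_1(t); annealing omitted for brevity

yb = torch.tensor(y_bar, dtype=torch.float32).unsqueeze(1)
eta_2 = torch.tensor(np.exp(-beta * sigma), dtype=torch.float32)  # Equation (1)

# Step 3: per-sample SGD whose step size is modulated by teacher confidence.
for i in torch.randperm(len(xw))[:1000]:           # illustrative number of steps
    student.zero_grad()
    loss = nn.functional.mse_loss(student(xw[i:i+1]), yb[i:i+1])
    loss.backward()
    with torch.no_grad():
        for p in student.parameters():
            # total learning rate: eta_t(x_t) = eta_1 * eta_2(x_t)
            p -= eta_1 * eta_2[i] * p.grad
```

Note how samples with high teacher uncertainty get a near-zero step size, so at those points the student effectively keeps what it learned from the weak data in Step 1.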
A toy problem
Let's apply FWL to a one-dimensional toy problem to illustrate its various steps.
Let $f(x)$ be the true function (red dotted line in the figure below), from which a small set of observations $D_s$ is provided (red points). These observations might be noisy, in the same way that labels obtained from a human labeler could be noisy.
A weak annotator function $f_w(x)$ (magenta line in the figure below) is provided as an approximation to $f(x)$. The task is to obtain a good estimate of $f(x)$ given the set $D_s$ of strong observations and the weak annotator function $f_w(x)$. We can easily obtain a large set of observations $D_w$ from $f_w(x)$ at almost no cost (magenta points in the figure below).
[Figure: the toy problem. Red dotted line: the true function $f(x)$; red points: the strong observations $D_s$; magenta line: the weak annotator $f_w(x)$; magenta points: the weak observations $D_w$; blue lines: the student's fit in each setting.]
We consider two experiments:
- A neural network trained on weak data and then fine-tuned on strong data from the true function, which is the most common semi-supervised approach (shown in the figure above).
- A teacher-student framework trained with the proposed FWL approach.
As can be seen in the figure above, FWL, by taking label confidence into account, gives a better approximation of the true hidden function. We repeated the above experiment 10 times and compared the average RMSE of the student with respect to the true function on a set of test points (the exact numbers are in the paper):
- a student trained on weak data only (blue line in the figure above) performs worst,
- a student trained on weak data and then fine-tuned on the true observations does considerably better,
- and a student trained on weak data, then fine-tuned using the soft labels and confidence information provided by the teacher (i.e. FWL), performs best.
More details of the neural network, along with the specification of the data used in the above experiment, can be found in the paper.
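For completeness, here is a sketch of this kind of evaluation, continuing the toy pipeline above: the RMSE of the fine-tuned student against the true function on a held-out grid of test points (the grid itself is an illustrative choice).

```python
# Evaluate the fine-tuned student against the true function on a test grid.
x_test = torch.linspace(-3.0, 3.0, 200).unsqueeze(1)
with torch.no_grad():
    y_pred = student(x_test).squeeze(1).numpy()
rmse = np.sqrt(np.mean((y_pred - true_function(x_test.squeeze(1).numpy())) ** 2))
print(f"test RMSE vs. true function: {rmse:.4f}")
```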
That was the general idea of FWL. To see how it works for real-world tasks, like sentiment classification or document ranking, you can take a look at our paper:
- Mostafa Dehghani, Arash Mehrjou, Stephan Gouws, Jaap Kamps, and Bernhard Schölkopf. "Fidelity-Weighted Learning". In Proceedings of the Sixth International Conference on Learning Representations (ICLR 2018).
[1] In practice, we furthermore divide the space of data into several regions and assign each region a separate GP trained on samples from that region. This leads to a better exploration of the data space and makes use of the inherent structure of the data. This algorithm, which we call clustered GP, gave better results compared to a single GP; a sketch follows below, and the paper has more details.
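For illustration, here is one way such a clustered GP could look, sketched under the assumption that the regions are found with k-means in the student's representation space; the paper's exact clustering scheme may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor

def fit_clustered_gp(z, y, n_clusters=5):
    """Partition the representation space and fit one GP per region."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(z)
    gps = [GaussianProcessRegressor().fit(z[km.labels_ == c], y[km.labels_ == c])
           for c in range(n_clusters)]
    return km, gps

def predict_clustered_gp(km, gps, z):
    """Route each sample to the GP responsible for its region."""
    labels = km.predict(z)
    y_bar = np.empty(len(z))
    sigma = np.empty(len(z))
    for c, gp in enumerate(gps):
        mask = labels == c
        if mask.any():
            y_bar[mask], sigma[mask] = gp.predict(z[mask], return_std=True)
    return y_bar, sigma
```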