
Learning to LSTM

What is this

This is about time-series prediction/classification with neural networks using Keras.

Overly long intro

Recently I have been trying to learn to use LSTM (Long Short Term Memory) networks. I picked up the Kaggle VSB Power Line Fault Detection competition. The point of this competition was to classify power line signals as faulty or not faulty. The given data looks like this:

signal plot

Based on the raw signal data alone, the challenge in this competition was to classify each signal as faulty/not faulty.

As shown in the figure above, there are 800k values per signal in the data, and 8712 signals in the training set. The signals come from 3-phase power measurements: there are always 3 signals together, forming a single 3-phase "measurement" or observation. The total number of such 3-phase groups is 8712/3 = 2904. The metadata looks something like this:

train_meta = pd.read_csv("../input/metadata_train.csv")
#There are 3 rows/signals per measurement, so 6 prints first 2
train_meta.head(6)

The shape of the (training) signal data is 800k rows and 8712 columns: one column for each signal, one row for each measured value. The 800k values per signal cover a 20 millisecond measurement period.

df_sig.shape

(800000, 8712)

A few rows for the first two 3-signal sets (columns 1-3 and 4-6):

The goal is to use this raw data to identify fault patterns. LSTM networks were very popular in this competition, as the data is a set of 8712 time-series instances, so it was a good place to learn how to use LSTMs.

I like Kaggle in general for this, as there are good kernels to get started with, and discussion on what works. Of course, I always do poorly in the competitions, but I find it good practice anyway.

Input shape

One thing I found confusing, and based on my internet searches many others do too, is how to shape and form the input for the LSTM network, both to train it and to run the "predictions".

The input shape should be a 3-dimensional array of (number of observations, number of timesteps, number of features). Using the data from above, what would that be?

Each signal measurement would be one observation. With the data structured as above, this would be 8712 observations, 800000 timesteps, and 1 feature (the raw value). So the input shape would be (8712, 800000, 1).

This is because we have:

  • 8712 separate signal observations,
  • 800000 measurements for each signal (covering a 20 ms period, so a time-series),
  • 1 feature, which is just the raw measurement value for the signal.
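
Just to get a feel for why that naive shape is a problem, a quick back-of-the-envelope check (my own illustration, not from the competition kernels):

import numpy as np

#memory needed for the naive (8712, 800000, 1) input as float32
n_obs, n_steps, n_feats = 8712, 800000, 1
size_gb = n_obs * n_steps * n_feats * np.dtype(np.float32).itemsize / 1024**3
print(size_gb) #roughly 26 GB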

Number of timesteps

I initially tried to train the network with this setup. It didn't go very well. I found various sources saying the number of timesteps should be much smaller: below 200, max around 250-500, 10 steps or less, or up to 1000 timesteps. Not really sure which is right, but certainly much less than 800k.

One of the most popular kernels in the competition used a 160-timestep version. How do you go from 800k timesteps to 160 timesteps? By "binning" the 800k values into 160 buckets. The size of each bucket is 800000 / 160 = 5000. In Python this looks something like this:

bkt_count = 160
data_size = 800000
bkt_size = int(data_size/bkt_count)
for i in range(0, data_size, bkt_size):
    # cut data to bkt_size (bucket size)
    bkt_data_raw = data_sig[i:i + bkt_size]
    #now here bkt_data_raw needs to be summarized

The above would process the data into a set of 160 buckets, so producing the 160 timesteps.

Features

We can then calculate various summary statistics for these "buckets" as features, for example:

bkt_data_raw = data_sig[i:i + bkt_size]
bkt_avg_raw = bkt_data_raw.mean() #1
bkt_sum_raw = bkt_data_raw.sum() #1
bkt_std_raw = bkt_data_raw.std() #1
bkt_std_top = bkt_avg_raw + bkt_std_raw #1
bkt_std_bot = bkt_avg_raw - bkt_std_raw #1
bkt_percentiles = np.percentile(bkt_data_raw, [0, 1, 25, 50, 75, 99, 100]) #7

The above gives 1+1+1+1+1+7 = 12 features. I had a few more features, and experimented with various others, but this works to illustrate the concept.
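
Putting the bucketing loop and the summary statistics together, the per-signal transformation could look something like the following. This is just a minimal sketch: signal_to_features is my own helper name (not from the kernel), and it assumes data_sig is the 800k-value Numpy array for one signal.

import numpy as np

def signal_to_features(data_sig, bkt_count=160):
    #turn one raw signal (800k values) into a (bkt_count, 12) feature matrix
    bkt_size = int(len(data_sig) / bkt_count)
    rows = []
    for i in range(0, len(data_sig), bkt_size):
        bkt = data_sig[i:i + bkt_size]
        avg, total, std = bkt.mean(), bkt.sum(), bkt.std()
        percentiles = np.percentile(bkt, [0, 1, 25, 50, 75, 99, 100])
        #5 plain features + 7 percentiles = 12 features per bucket/timestep
        rows.append(np.concatenate([[avg, total, std, avg + std, avg - std], percentiles]))
    return np.array(rows)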

So, assume we have the 160 timesteps with 12 summary features each. This would mean the LSTM input shape is now: (8712, 160, 12).

Transforming the input to 3-dimensions

Assume we loop through all the data (800k values) for a signal, and turn each into 160 rows with 12 features. How do we turn that into (8712, 160, 12) for the LSTM? The key (for me) is in the numpy.reshape() function. We can do something like this:

X = df_train.values.reshape(measures, timesteps, total_features)

This works nicely because underneath most things in Python ML libraries rely on Numpy arrays (or arrays of arrays etc.). Assuming that df_train in the code above has a number of values that is divisible by the given reshape sizes, Numpy will reshape (as the name says..) it into a set of multi-dimensional arrays as requested.

In the above, the number of elements for one signal is 160*12 = 1920, because a signal now has 160 timesteps (buckets), each with 12 features. Since we know the number of measured signals we expect to have, the total number of values can be calculated as 8712*1920 = 16727040.

There are many ways to build the initial set of features. I started by putting all the features for a signal on a single row, so I had rows with 1920 columns, and 8712 such rows. Reshaping in the end worked fine and produced the correct shape of (8712, 160, 12).

However, with such a shape it was a bit difficult to apply Pandas operations to the data, such as scaling all the features using the sklearn scalers. Pandas operations expect a dataframe to have 2-dimensional data, where each column holds data for a single feature. The scalers, for example, scale one column at a time. With 1920 columns, each feature's data appeared only in every 12th column.

To address this, I ended up with data in the format of 160 * 8712 = 1393920 rows, each row with N columns, where N equals the number of features I had, in this case 12. The 160 timesteps for each signal were laid out with each timestep on its own row, 160 consecutive rows per signal, immediately followed by another 160 rows for the next signal, and so on. The final result is a dataframe with a shape of (1393920, 12): the 160 timesteps for all 8712 signals, meaning 160 * 8712 = 1393920 rows.

This would look something like this for the first signal:

and for the second signal:

and so on for all 8712 signals in chunks of 160 rows. So each feature is in its own column, and each timestep is on its own row.
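
As a sketch of how that long format could be built (assuming the signal_to_features() helper sketched earlier, and df_sig holding one column per signal as shown at the start):

import numpy as np
import pandas as pd

#stack the per-signal (160, 12) feature matrices on top of each other,
#160 consecutive rows per signal, giving a (8712*160, 12) dataframe
per_signal = [signal_to_features(df_sig[col].values) for col in df_sig.columns]
df_train = pd.DataFrame(np.concatenate(per_signal))
print(df_train.shape) #(1393920, 12)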

Since all the features are in the same column for all the signals and all timesteps, general transformation tools such as sklearn scalers can be applied very simply:

minmax = MinMaxScaler(feature_range=(-1,1))
df_train_scaled = pd.DataFrame(minmax.fit_transform(df_train))

This scales all the features for all timesteps and all observations (signals here) at once correctly. Well, I think it does anyway, so correct me if wrong :).
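
For the test data I would then reuse the scaler fitted on the training features, so both sets end up in the same value range (a sketch; df_test here stands for a hypothetical test-set feature dataframe with the same 12 columns):

df_test_scaled = pd.DataFrame(minmax.transform(df_test))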

The scaled version for the 1st signal:

And to reshape it into the 3-dimensional array shaped (8712, 160, 12):

observations = 8712
timesteps = 160
total_features = 12
X = df_train_scaled.values.reshape(observations, timesteps, total_features)

The dataframe .values attribute is a Numpy array, so the Numpy reshape function is available there.
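
A quick sanity check I find useful here (my own addition): after the reshape, the first block of 160 rows in the long dataframe should be exactly the first signal's timestep matrix.

import numpy as np

#X[0] is the (160, 12) matrix for the first signal; it should equal the
#first 160 rows of the 2-dimensional (long format) dataframe
assert np.allclose(X[0], df_train_scaled.values[:timesteps])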

X would now be suitable to use as input for an LSTM network. The one I have not shown how to build yet. Only a million words and stuff so far but no LSTM and this was to be all about LSTM. Nice. Good job.

Parallelizing pre-processing

On Kaggle, I eventually built my own kernel for this pre-processing to create features and scale them, and to save the results for later model experiments. It uses Python multiprocessing to create the buckets and features. I found this to have multiple benefits:

  • The transformation of the raw input data can be done once and used many times to experiment with different LSTM and other ML models and NN architectures. Since pre-processing large datasets takes time, this speeds up the process immensely.

  • Pandas is built to be single-core (as is Python itself..), so you have to do tricks like this or your multiple cores just sit idle and go to waste.

  • Kaggle specific: By running preprocessing in a separate kernel, I can run it in parallel in one kernel while experimenting with models in other kernels.

  • Kaggle specific: Kaggle CPU kernels have 4 CPU cores, allowing 2x faster preprocessing than GPU kernels, which have only 2 CPU cores. But you need GPU kernels to build the LSTM models.

In some parallel architectures like PySpark this would be less of a problem, but I do not have access to such systems, so I work with what I have, huh. Maybe someday if I am rich enough or whatever. Plz send food, beer, monies, and epic computer parts, thx.
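
A minimal sketch of the multiprocessing idea (my own structure, not the exact kernel code; it assumes the signal_to_features() helper sketched earlier, and relies on fork-style process start so the workers can see df_sig):

from multiprocessing import Pool

def process_column(col_name):
    #bucket one signal column into its (160, 12) feature matrix
    return signal_to_features(df_sig[col_name].values)

#Kaggle CPU kernels have 4 cores available
with Pool(processes=4) as pool:
    per_signal = pool.map(process_column, list(df_sig.columns))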

Building the LSTM model

I made several attempts at building a model. Here is one quite directly based on the popular Kaggle kernel I linked at the beginning:

def create_model4(input_data):
    input_shape = input_data.shape
    inp = Input(shape=(input_shape[1], input_shape[2],), name="input_signal")
    x = Bidirectional(CuDNNLSTM(128, return_sequences=True, name="lstm1"), name="bi1")(inp)
    x = Bidirectional(CuDNNLSTM(64, return_sequences=False, name="lstm2"), name="bi2")(x)
    x = Dense(128, activation="relu", name="dense1")(x)
    x = Dense(64, activation="relu", name="dense2")(x)
    x = Dense(1, activation='sigmoid', name="output")(x)
    model = Model(inputs=inp, outputs=x)
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=[matthews_correlation])
    return model

Few points here:

  • CuDNNLSTM is a specialized LSTM layer optimized for NVIDIA GPUs. I always thought Keras would automatically pick the optimal GPU implementation based on the backend. Seems not.

  • CuDNN is NVIDIA’s Deep Neural Network library.

  • Bidirectional LSTM refers to a network using information about the "past and future". This is only useful if you actually know the future, as in translating text from one language to another, where you know what words will follow the current word. In this case, I know the whole signal, so the LSTM can also look at the values that follow the current "bin" (timestep) in the following bins.

  • The "return_sequences" flag tells the layer to give out a similar 3D array as the original input. So each of the 128 cells in the first layer will not just output on value for the sequence but also output an intermediate value for each timestep. So instead of shape (128), the output would be (160, 128).

  • The second LSTM layer outputs a shape of (64), which is just the final output of the LSTM after processing all the timesteps. This is a suitable shape for the following dense layer, since it is just a flat array.

But let's see what the model actually looks like.

Inspecting the model

First, a one-directional model to keep it a bit simpler still:

def create_model3(input_data):
    input_shape = input_data.shape
    inp = Input(shape=(input_shape[1], input_shape[2],), name="input_signal")
    x = CuDNNLSTM(128, return_sequences=True, name="lstm1")(inp)
    x = CuDNNLSTM(64, return_sequences=False, name="lstm2")(x)	    
    x = Dense(128, activation="relu", name="dense1")(x)
    x = Dense(64, activation="relu", name="dense2")(x)
    x = Dense(1, activation='sigmoid', name="output")(x)
    model = Model(inputs=inp, outputs=x)
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=[matthews_correlation])
    return model

Summarizing the model:

model.summary()

Gives:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_signal (InputLayer)    (None, 160, 12)           0         
_________________________________________________________________
lstm1 (CuDNNLSTM)            (None, 160, 128)          72704     
_________________________________________________________________
lstm2 (CuDNNLSTM)            (None, 64)                49664     
_________________________________________________________________
dense1 (Dense)               (None, 128)               8320      
_________________________________________________________________
dense2 (Dense)               (None, 64)                8256      
_________________________________________________________________
output (Dense)               (None, 1)                 65        
=================================================================
Total params: 139,009
Trainable params: 139,009
Non-trainable params: 0
_________________________________________________________________

And to visualize this (in Jupyter):

from keras.utils import plot_model
plot_model(model, show_shapes=True, to_file="lstm.png")
from IPython.display import Image
Image("lstm.png")

Giving us:

This shows how the LSTM parameters affect the basic structure of the network.

  • input_signal: shows the shape of the input data: 160 timesteps, 12 features. None stands for any number of rows (observations).

  • lstm1: 128 LSTM units, with return_sequences=True. The output is now (None, 160, 128), where 128 matches the number of LSTM units and replaces the number of features in the input. So 128 features, each one produced by a single LSTM "unit".

  • lstm2: 64 LSTM units, with return_sequences=False. Output shape is (None, 64). Outputting values at every timestep is disabled (return_sequences=False), so this is just a 2-dimensional output: 64 features, one for each LSTM "unit", for each row of input coming through lstm1.

  • dense1, dense2, output: regular dense network layers.

And the same for a bi-directional LSTM:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_signal (InputLayer)    (None, 160, 60)           0         
_________________________________________________________________
bi1 (Bidirectional)          (None, 160, 256)          194560    
_________________________________________________________________
bi2 (Bidirectional)          (None, 128)               164864    
_________________________________________________________________
dense1 (Dense)               (None, 128)               16512     
_________________________________________________________________
dense2 (Dense)               (None, 64)                8256      
_________________________________________________________________
output (Dense)               (None, 1)                 65        
=================================================================
Total params: 384,257
Trainable params: 384,257
Non-trainable params: 0
_________________________________________________________________

So bi-directional doubles the number of features the LSTM layers produce, as it goes both forward and backward across the timesteps.
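
As a side check on the parameter counts in these summaries (based on my understanding that the CuDNN LSTM implementation keeps two separate bias vectors per gate, hence the "+ 2" below), the numbers can be reproduced like this:

def cudnn_lstm_params(input_dim, units):
    #4 gates, each with input weights, recurrent weights and two bias vectors
    return 4 * units * (input_dim + units + 2)

print(cudnn_lstm_params(12, 128))          #72704  -> lstm1
print(cudnn_lstm_params(128, 64))          #49664  -> lstm2
print(2 * cudnn_lstm_params(60, 128))      #194560 -> bi1 (that run had 60 features)
print(2 * cudnn_lstm_params(2 * 128, 64))  #164864 -> bi2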

Training and predicting

Training and predicting with the LSTM model is no different from others, but a quick look anyway.

Training:

splits = list(StratifiedKFold(n_splits=N_SPLITS, shuffle=True, random_state=2019).split(X, y))
preds_val = []
y_val = []
for idx, (train_idx, val_idx) in enumerate(splits):
    K.clear_session()
    print("Beginning fold {}".format(idx+1))
    train_X, train_y, val_X, val_y = X[train_idx], y[train_idx], X[val_idx], y[val_idx]
    model = create_model4(train_X)
    ckpt = ModelCheckpoint('weights_{}.h5'.format(idx), save_best_only=True, save_weights_only=True, verbose=1, monitor='val_matthews_correlation', mode='max')
    model.fit(train_X, train_y, batch_size=128, epochs=50, validation_data=[val_X, val_y], callbacks=[ckpt])
    model.load_weights('weights_{}.h5'.format(idx))
    #collect out-of-fold predictions with the best weights for this fold
    preds_val.append(model.predict(val_X))
    y_val.append(val_y)

Prediction:

preds = []
for i in range(N_SPLITS):
    model.load_weights('weights_{}.h5'.format(i))
    pred = model.predict(X_test_input, batch_size=300, verbose=1)
    pred_3 = []
    for pred_scalar in pred:
        #duplicate each prediction for the 3 signals of a measurement
        for _ in range(3):
            pred_3.append(pred_scalar)
    preds.append(pred_3)
threshold = 0.5
preds_test = (np.squeeze(np.mean(preds, axis=0)) > threshold).astype(np.int)
submission['target'] = preds_test
submission.to_csv('submission.csv', index=False)
submission.head()

With this, I managed to train the LSTM and have it produce actually meaningful results. Not that I got anywhere in the competition, but it was a good experience for me, since in my first tries I produced some completely broken input data, formatted all wrong. Training with that broken data format delivered zero results, which might be expected since the features were all wrong and the data for them was mixed up. I feel I mostly learned to use an LSTM, even if I fell short in the Kaggle competition by the smallest fractions of the target metric.

From all this, I wandered a bit into other topics, such as what the actual structure of an LSTM network is when built with Keras, how to combine other types of data and layers alongside LSTM in the same network, and what adversarial validation is. Maybe I will manage to write a bit about them next.


My Pandas Cheat Sheet

https://stackoverflow.com/questions/36794433/python-using-multiprocessing-on-a-pandas-dataframe
http://www.racketracer.com/2016/07/06/pandas-in-parallel/

  import pandas as pd

  pd.__version__

Creating Dataframes

  df = pd.DataFrame()

select multiple columns as a dataframe from a bigger dataframe:

  df2 = df[['Id', 'team', 'winPlacePerc']]

select a single column as a dataframe:

  df2 = df[['name']]
  #double square brackets make the result a dataframe,
  #single brackets make it a series

pandas axis:

  axis 1 = columns, axis 0 = rows

get a series from a dataframe column filtered by another column:

  zero_names = df[df["weights"] < 10000]["names"]

turn it into a list:

  zn_list = zero_names.tolist()

N-way split:

  X_train, X_test, y_train, y_test, r_train, r_test = \
   model_selection.train_test_split(X, y, r,
      test_size=0.25, random_state=99)

load only part of data: nrows

Accessing Values

find row at 1111th index in dataframe:

  df.iloc[1111]

find row with value 1111 for index in dataframe:

  df.loc[1111]

set value in cell (stackoverflow):

  children_df.at[555, "visited"] = True 
    #555 here refers to row index

modify row while iteration over it:

  for index, row in df.iterrows():
    df.at[index,'open_change_p1'] = day_change_op1

drop last 5 columns

  df_train.drop(df_train.columns[-5:], axis=1)

print numpy data types in multidimensional array (in jupyter return value is printed):

[type(row) for row in values[0]]

Aggregates

calculate mean for ‘team’ and ‘winPlacePerc’ columns, after grouping them by match id and group id:

  agg_cols = ['groupId', 'matchId', 'team', 'winPlacePerc']
  df_mean = df_test[agg_cols].groupby(["groupId", "matchId"],
    as_index=False).mean().add_suffix("_mean")

run multiple such aggregations at once:

  agg = df_train.groupby(['matchId']) \
    .agg({'players_in_team': ['min', 'max', 'mean', 'median']})

specify the name suffixes of the generated aggregation column names:

  agg_train = df_train[agg_cols].groupby(["groupId", "matchId"],
                        as_index=False) \
    .agg([('_min', 'min'), ('_max', 'max'), ('_mean', 'mean')])

multiple aggregations will create a 2-level column header (see df.head()). to change it into one level (for merge etc):

  mi = agg_train.columns #mi for multi-index
  ind = pd.Index([e[0] + e[1] for e in mi.tolist()])
  agg_train.columns = ind

custom aggregation function:

  def q90(x):
      return x.quantile(0.9)

  agg = df_train.groupby(['matchId']) \
    .agg({'players_in_team':
       ['min', 'max', 'mean', 'median', q90]})

create new column as number of rows in a group:

  agg = df_train.groupby(['matchId']) \
          .size().to_frame('players_in_match')

group and sum column for groups:

  revenues = df_train.groupby("geoNetwork_subContinent") \
     ['totals_transactionRevenue'].sum() \
     .sort_values(ascending=False)

Merge

merge two dataframes (here df_train and agg) by a single column:

  df_train = df_train.merge(agg, how='left', on=['groupId'])

merge on multiple columns:

  dfg_train = dfg_train.merge(agg_train, how='left',
     on=["groupId", "matchId"])

set merge suffixes = ["", "_rank"] <- left and right side on overlapping columns

  dfg_test = dfg_test.merge(agg_test_rank, suffixes=["", "_rank"],
        how='left', on=["groupId", "matchId"])

above sets columns from dfg_test to have no suffix ("") and columns from agg_test_rank to have suffix "_rank"

merge columns by value:

  df_train['rankPoints'] = np.where(df_train['rankPoints'] < 0,
       df_train['winPoints'], df_train['rankPoints'])

->above sets value as winpoints if rankpoints is below zero, else rankpoints

merge by defining the column names to match on left and right:

  pd.merge(left, right, left_on='key1', right_on='key2')

merge types (the how=merge_type in pd.merge)(link):

  • inner: keep rows that match in both left and right
  • outer: keep all rows in both left and right
  • left: keep all rows from left and matching ones from right
  • right: keep all rows from right and matching ones from left

Rename, Delete, Compare Dataframes

rename columns to remove a suffix (here remove _mean):

  df_mean = df_mean.rename(columns={'groupId_mean': 'groupId',
               'matchId_mean': 'matchId'}) 

delete dataframes to save memory:

  del df_agg

find all columns in one dataframe but not in another

  diff_set = set(train_df.columns).difference(set(test_df.columns))
  print(diff_set)

find all columns present in both sets (here a dataframe's columns and an exclusion list) and drop them

  common_set = set(train_df.columns).intersection(set(FEATS_EXCLUDED))
  train_df = train_df.drop(common_set, axis=1)

drop rows with index in list (stackoverflow):

  df.drop(df.index[[1,3]])

replace nulls/nans with 0:

  X.fillna(0)
  X_test.fillna(0)

only for specific columns:

  df[['a', 'b']] = df[['a','b']].fillna(value=0)

drop rows with null values for specific column:

  df_train.dropna(subset=['winPlacePerc'], inplace=True)

drop columns B and C:

  df.drop(['B', 'C'], axis=1)
  df.drop(['B', 'C'], axis=1, inplace = True)

Dataframe Statistics

this df has 800k rows (values) and 999 columns (features):

  df_train_subset.shape
  (800000, 999)

data types:

  df_train.dtypes

number of rows, columns, memory size (light, fast):

  df.info()

statistics per feature/column (cpu/mem heavy):

  df.describe()

number of unique values:

  df.nunique()

bounds:

  df.min()
  df.max()

replace infinity with 0. esp scalers can have issues with inf:

    X[col] = X[col].replace(np.inf, 0)

replace positive and negative inf with nan:

  df_pct.replace([np.inf, -np.inf], np.nan)

number of non-nulls per row in dataframe:

  df_non_null = df.notnull().sum(axis=1)

find the actual rows with null values:

  df_train[df_train.isnull().any(axis=1)]

number of times different values appear in column:

  df_train["device_operatingSystem"].value_counts()

sorted by index (e.g., 0,1,2,…)

  phase_counts = df_tgt0["phase"].value_counts().sort_index()

number of nans per column in dataframe:

  df.isnull().sum()

find columns with only one unique value

  const_cols = [c for c in train_df.columns 
                if train_df[c].nunique(dropna=False)==1 ]

drop them

  train_df = train_df.drop(const_cols, axis=1)

replace outlier values over 3std with 3std:

  upper = df_train['totals_transactionRevenue'].mean() \
            + 3*df_train['totals_transactionRevenue'].std()
  mask = np.abs(df_train['totals_transactionRevenue']
           - df_train['totals_transactionRevenue'].mean()) > \
           (3*df_train['totals_transactionRevenue'].std())
  df_train.loc[mask, 'totals_transactionRevenue'] = upper

or use zscore:

  df['zscore'] = (df.a - df.a.mean())/df.a.std(ddof=0)

from scipy (stackoverflow)

  from scipy import stats
  import numpy as np
  z = np.abs(stats.zscore(boston_df))
  print(z)

show groupby object data statistics for each column by grouped element:

  grouped.describe()

create dataframe from classifier column names and importances (where supported), sort by weight:

  df_feats = pd.DataFrame()
  df_feats["names"] = X.columns
  df_feats["weights"] = clf.feature_importances_
  df_feats.sort_values(by="weights")

diff columns to show the absolute value of change from row to row:

  for col in input_col_names:
    df_diff[col] = df_sig[col].diff().abs()

unique datatypes in dataframe:

  train_df.dtypes.unique()

show columns of selected type:

  train_df.select_dtypes(include=['float64'])

Numpy

Get numpy matrix from dataframe:

  data_diffs = df_diff.values

get column from numpy matrix as row

  for sig in range(0, 3):
      data_sig = data_measure[:,sig]

slice numpy array:

  bin_data_raw = data_sig[i:i + bin_size]

calculate statistics over numpy array:

  bin_avg_raw = bin_data_raw.mean()
  bin_sum_raw = bin_data_raw.sum()
  bin_std_raw = bin_data_raw.std()
  bin_percentiles = np.percentile(bin_data_raw, [0, 1, 25, 50, 75, 99, 100])
  bin_range = bin_percentiles[-1] - bin_percentiles[0]
  bin_rel_perc = bin_percentiles - bin_avg_raw

count outliers at different scales:

  outliers = []
  for r in range(1, 7):
      t = bin_std_diff * r
      o_count = np.sum(np.abs(bin_data_diff-bin_avg_diff) >= t)
      outliers.append(o_count)

concatenate multiple numpy arrays into one:

  bin_row = np.concatenate([raw_features, diff_features, bin_percentiles, bin_rel_perc, outliers])

limit values between min/max:

  df_sig.clip(upper=127, lower=-127)

value range in column (ptp = peak to peak):

  df.groupby('GROUP')['VALUE'].agg(np.ptp)

replace nan:

  my_arr[np.isnan(my_arr)] = 0

Time-Series

create values for percentage changes over time (row to row):

  df_pct = pd.DataFrame()
  for col in df_train_subset.columns[:30]:
      df_pct['pct_chg_'+col] = df_train_subset[col].pct_change().abs()

pct_change() gives the change in percentage over time, abs() makes it absolute if just looking for overall change as in negative or positive.

also possible to set number of rows to count pct_change over:

  df_pct['pct_chg_'+col] = df_train_subset[col].pct_change(periods=10)

pct_change on a set of items with specific values:

  df['pct_chg_open1'] = df.groupby('assetCode')['open'] \
           .apply(lambda x: x.pct_change())

TODO: try different ways to calculate ewm to see how this all works (EWMA, stackoverflow: https://stackoverflow.com/questions/37924377/does-pandas-calculate-ewm-wrong):

  df["close_ewma_10"] = df.groupby('assetName')['pct_chg_close1']
            .apply(lambda x: x.ewm(span=10).mean())

shift by 10 backwards

  y = mt_df["mket_close_10"].shift(-10)

Date Types

convert seconds into datetime

  start_col = pd.to_datetime(df_train.visitStartTime, unit='s')

parse specific columns as specific date types:

  df_train = pd.read_csv("../input/train-flat.csv.gz",
                         parse_dates=["date"], 
                         dtype={'fullVisitorId': 'str',
                              'date': 'str'
                             },
                        )

or if need to parse specific format:

  pd.to_datetime('13000101', format='%Y%m%d', errors='ignore')

access elements of datetime objects:

  data_df["weekday"] = data_df['date'].dt.weekday
  data_df["hour"] = data_df['date'].dt.hour

Data Type Manipulations

one-hot encode / convert categorical:

  dfg_train = pd.get_dummies( dfg_train, columns = ['matchType'] )
  dfg_test = pd.get_dummies( dfg_test, columns = ['matchType'] )

set category value by range:

  # here 1 = solo game, 2 = duo game, 3 = squad, 4 = custom
  df_train['team'] = [1 if i == 1 else 2 if i == 2 else 4 if i > 4 else 3
              for i in df_train["players_in_team_q90"]]

calculate value over two columns and make it a new column:

  dfg_train['killsNorm'] = dfg_train['kills_mean'] * \
      ((100-dfg_train['players_in_match'])/100 + 1)

  data_df['hit_view_ratio'] = \
      data_df['totals_pageviews']/data_df['totals_hits']

mark a set of columns as category type:

  for col in X_cat_cols:
      df_train[col] = df_train[col].astype('category')

set category value based on set of values in column:

  X_test['matchType'] = X_test['matchType'].astype('str')
  X_test.loc[X_test['matchType'] == "squad-fpp", 'matchType'] =
     "squad"
  X_test['matchType'] = X_test['matchType'].astype('category')

how to get the indices from a dataframe as a list (e.g., collect a subset):

  list(outlier_collection.index.values)

to see it all on one line in Jupyter (easier copying, e.g. to drop all in list):

print(list(outlier_collection.index.values))

Jupyter

show dataframe as visual table in notebook (stackoverflow):

  from IPython.display import display, HTML

  display(df1)
  display(HTML(df2.to_html()))

pycharm notebook (jetbrains help):

correlations, unstacking correlation matrix link

Memory Reducer (from a Kaggle kernel):

def reduce_mem_usage(df):
    """ iterate through all the columns of a dataframe and modify the data type
        to reduce memory usage.        
    """
    start_mem = df.memory_usage().sum() / 1024**2
    print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
    
    for col in df.columns:
        col_type = df[col].dtype
        
        if col_type != object and col_type.name != 'category':
            #print(col_type)
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)  
            else:
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
        else:
            df[col] = df[col].astype('category')

    end_mem = df.memory_usage().sum() / 1024**2
    print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
    print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
    
    return df

Investigating ROC/AUC

Executive Summary

ROC and AUC are terms that often come up in machine learning, in relation to evaluating models. In this post, I try to examine what ROC curves actually are, how they are calculated, what the threshold in a ROC curve is, and how it impacts the classification if you change it. The results show that using low threshold values leads to classifying more objects into the positive category, while high thresholds lead to more negative classifications. The Titanic dataset is used here as an example to study the threshold effect: a classifier gives probabilities for people to survive.

By default, the classifier seems to use 50% probability as a threshold to classify one as a survivor (positive class) or non-survivor (negative class). But this threshold can be varied. For example, using a threshold of 1% leads to giving a survivor label to anyone who gets a 1% or higher probability to survive by the classifier. Similarly, a threshold of 95% requires people to get at least 95% probability from the classifier to be given a survivor label. This is the threshold that is varied, and the ROC curve visualizes how the change in this probability threshold impacts classification in terms of getting true positives (actual survivors predicted as survivors) vs getting false positives (non-survivors predicted as survivors).

For the loooong story, read on. I dare you. :)

Introduction

Area under the curve (AUC) and receiver operating characteristic (ROC) are two terms that seem to come up a lot when learning about machine learning. This is my attempt to get it. Please do let me know where it goes wrong..

The ROC is typically drawn as a curve in a figure like this (from Wikipedia):

FIGURE 1. Example ROC/AUC curve.

It uses true positive rate (TPR) and false positive rate (FPR) as the two measures to compare. A true positive (TP) being a correct positive prediction and a false positive (FP) being a wrong positive prediction.

There is an excellent introduction to the topic in the often-cited article "An introduction to ROC analysis" by Tom Fawcett, which is actually a very understandable article (for the most part), unlike most academic articles I read. Brilliant. But still, I wondered, so let's see..

To see ROC/AUC in practice, instead of just reading about it all over the Internet, I decided to implement a ROC calculation myself and see if I could match it with the existing implementations. If so, that should at least give me some confidence that my understanding of the topic is correct.

Training a classifier to test ROC/AUC

To do this, I needed a dataset and a fitting of some basic classifier on that data to run my experiments. I have recently been going through some PySpark lectures on Udemy, so I could maybe learn some more big data stuffs and get an interesting big data job someday. Woohoo. Anyway. The course was using the Titanic dataset, so I picked that up, and wrote a simple classifier for it in Pandas/Scikit. Being a popular dataset, there is also a related Kaggle for it. Which is always nice, allowing me to create everything a bit faster by using the references, and focus on the ROC investigation faster.

The classifier I used is Logistic Regression, and the notebook is available on my GitHub (GitHub sometimes seems to have issues rendering it, but it's there; I will look into putting it on Kaggle later too). Assuming some knowledge of Python, Pandas, sklearn, and the usual stuff, the training of the model itself is really the same as usual:

X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.2, random_state=2)
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X_train, y_train)

The above is of course just a cut off into the middle of the notebook, where I already pre-processed the data to build some set of features into the features variable, and took the prediction target (survived) into the target variable. But that is quite basic ML stuff you find in every article on the topic, and on many of the “kernels” on the Kaggle page I linked. For more details, if all else fails, check my notebook linked above.

The actual training of the classifier (after preprocessings..) is as simple as the few lines above. That’s it. I then have a Logistic Regression classifier I can use to try and play with ROC/AUC. Since ROC seems to be about playing with thresholds over predicted class probabilities, I pull out the target predictions and their given probabilities for this classifier on the Titanic dataset:

predictions = logreg.predict(X_test)
predicted_probabilities = logreg.predict_proba(X_test)

Now, predicted_probabilities holds the classifier's predicted probability values for each row in the dataset: 0 for never going to survive, 1 for 100% sure to survive. Just predictions, of course, not actual facts. That's the point of learning a classifier, to be able to predict the result for instances where you don't know it in advance, as you know.. :)

Just to see a few things about the data:

predicted_probabilities.shape

>(262, 2)

X_test.shape

>(262, 9)

This shows the test set has 262 rows (data items), there are 9 feature variables I am using, and the prediction probabilities are given in 2 columns. The classifier is a binary classifier, giving predictions for a given data instance as belonging to one of the two classes. In this case it is survived and not survived. True prediction equals survival, false prediction equals non-survival. The predicted_probability variable contains probabilities for false in column 0 and true in column 1. Since probability of no survival (false) is the opposite of survival (1-survival_probability, (true)), we really just need to keep one of those two columns. Because if it is not true, it has to be false. Right?
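
A quick way to convince myself of that (my own sanity check): the two probability columns should sum to 1 for every row.

import numpy as np

#the two columns are complements of each other, so this should return True
np.allclose(predicted_probabilities.sum(axis=1), 1.0)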

Cutting the true/false predictions to only include the true prediction, or the probability of survival (column 1, all rows):

pred_probs = predicted_probabilities[:, 1]

Drawing some ROC curves

With this, getting the ROC curve from sklearn is as simple as:

[fpr, tpr, thr] = roc_curve(y_test, pred_probs)

The Kaggle notebook I used as a reference visualizes this and also draws two fancy-looking dashed blue lines to illustrate a point on the ROC curve where 95% of the surviving passengers are identified. This point is found by:

# index of the first threshold for which the TPR > 0.95
idx = np.min(np.where(tpr > 0.95))

So it is looking for the minimum index in the TPR list, where the TPR is above 95%. This results in a ROC curve drawing as:

plt.figure(figsize=(10, 6), dpi=80)
plt.plot(fpr, tpr, color='coral', label='ROC curve (area = %0.3f)' % auc(fpr, tpr))
plt.plot([0, 1], [0, 1], 'k--')
plt.plot([0,fpr[idx]], [tpr[idx],tpr[idx]], 'k--', color='blue')
plt.plot([fpr[idx],fpr[idx]], [0,tpr[idx]], 'k--', color='blue')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate (1 - specificity)', fontsize=14)
plt.ylabel('True Positive Rate (recall)', fontsize=14)
plt.title('Receiver operating characteristic (ROC) curve')
plt.legend(loc="lower right")
plt.show()

The resulting figure is:

FIGURE 2. The ROC curve I nicked from Kaggle.

How to read this? In the bottom left we have zero FPR and zero TPR. This means everything is predicted as false (no survivors). This corresponds to a threshold of 100% (1.0): you classify everything as false because no-one gets over 100% probability to survive. In the top right corner both TPR and FPR are 1.0, meaning you classify everyone as a survivor and no-one as a non-survivor. So false positives are maximized, as are true positives, since everyone is predicted to survive. This is the result of a threshold of 0%, as everyone gets a probability higher than 0% to survive. The blue lines indicate a point where over 95% of true positives are identified (the TPR value), simultaneously leading to about a 66% FPR. Of course, the probability threshold is not directly visible in this figure but has to be looked up separately, as we shall see in all the text I wrote below.
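
One way to look up that threshold (my own addition, using the thr array returned by roc_curve() above): the idx found for the 95% TPR point also indexes into the thresholds array.

#threshold, TPR and FPR at the point marked by the blue lines
print(thr[idx], tpr[idx], fpr[idx])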

To see the accuracy and AUC calculated for the base (logistic regression) classifier, and the AUC calculated for the ROC in the figure above:

print(logreg.__class__.__name__+" accuracy is %2.3f" % accuracy_score(y_test, predictions))
print(logreg.__class__.__name__+" log_loss is %2.3f" % log_loss(y_test, pred_probs))
print(logreg.__class__.__name__+" auc is %2.3f" % auc(fpr, tpr))

>LogisticRegression accuracy is 0.832
>LogisticRegression log_loss is 0.419
>LogisticRegression auc is 0.877

And some more statistics from sklearn to compare against when trying to understand it all:

confusion_matrix(y_test, predictions)

>array([[156,  19],
>       [ 25,  62]])

precision_score(y_test, predictions)

>0.7654320987654321

accuracy_score(y_test, predictions)

>0.8320610687022901

In the confusion matrix, there are 156 correct non-survivor predictions (true negatives) and 19 non-survivors wrongly predicted as survivors (false positives), along with 62 correct survivor predictions (true positives) and 25 survivors wrongly predicted as non-survivors (false negatives). The precision of the classifier is given as 0.7654320987654321 and the accuracy as 0.8320610687022901.

To evaluate my ROC/AUC calculations, I first need to try to understand how the algorithms calculate the metrics, and what are the parameters being passed around. I assume the classification algorithms would use 0.5 (or 50%) as a default threshold to classify a prediction with a probability of 0.5 or more as 1 (predicted true/survives) and anything less than 0.5 as 0 (predicted non-survivor/false). So let’s try to verify that. First, to get the predictions with probability values >= 0.5 (true/survive):

#ss is my survivor-set
ss1 = np.where(pred_probs >= 0.5)
print(ss1)

>(array([  0,   1,   3,   4,   8,  13,  16,  26,  27,  30,  32,  42,  43,
>         46,  49,  50,  57,  59,  66,  67,  76,  77,  79,  83,  84,  85,
>         86,  92,  93,  98, 101, 109, 111, 113, 117, 121, 122, 125, 129,
>        130, 133, 137, 140, 143, 145, 147, 148, 149, 150, 152, 158, 170,
>        175, 176, 177, 180, 182, 183, 184, 186, 201, 202, 204, 205, 206,
>        211, 213, 215, 218, 227, 230, 234, 240, 241, 242, 246, 248, 249,
>        250, 252, 256]),)

len(ss1[0])

>81

The code above filters the predicted probabilities to get a list of values where the probability is higher than or equal to 0.5. At first look, I don't know what the numbers in the array are. However, my guess is that they are indices into the X_test array that was passed as the features to predict from. So the rows at X_test indices 0,1,3,4,8,… are the data points predicted as true (survivors), and there are 81 such predictions. Those are the survivor predictions; how about the non-survivor predictions?

Using an opposite filter:

ss2 = np.where(pred_probs < 0.5)
len(ss2[0])

>181

Overall, 81 predicted survivors, 181 predicted casualties. Since y_test here has the known labels (to test against), we can check how many real survivors there are, and what the overall population is:

y_test.value_counts()

>0    175
>1     87

So the number of actual survivors in the test set is 87 vs 175 non-survivors. To see how many real survivors (true positives) there are among the 81 predicted survivors:

sum(y_test.values[i] for i in ss1[0])

>62

The above is just some fancy Python list comprehension or whatever that is for summing up all the numbers at the predicted list indices. Basically it says that out of the 81 predicted survivors, 62 were actual survivors. That matches the confusion matrix from above, so it seems I got that correct. Same for non-survivors:

sum(1 for i in ss2[0] if y_test.values[i] == 0)

>156

So, 156 out of the 181 predicted non-survivors actually did not make it. Again, matches the confusion matrix.

Now, my assumption was that the classifier uses the threshold of 0.5 by default. How can I use these results to check if this is true? To do this, I try to match the sklearn accuracy score calculations using the above metrics. Total correct classifications from the above (true positives+true negatives) is 156+62. Total number of items to predict is equal to the test set size, so 262. The accuracy from this is:

(156+62)/262

>0.8320610687022901

This is a perfect match with the accuracy_score calculated above, so I conclude that I understood the default behaviour of the classifier correctly. Now to use that understanding to see if I got the ROC curve correct. In FIGURE 2 further above, I showed the sklearn-generated ROC curve for this dataset and this classifier. Now I need to build my own from scratch to see if I get it right.

Thresholding 1-99% in 1% increments

To build a ROC curve myself using my own threshold calculations:

tpr2 = []
fpr2 = []
#do the calculations for range of 1-100 percent confidence, increments of 1
for threshold in range(1, 100, 1):
    ss = np.where(pred_probs >= (threshold/100))
    count_tp = 0
    count_fp = 0
    for i in ss[0]:
        if y_test.values[i] == 1:
            count_tp += 1
        else:
            count_fp += 1
    tpr2.append(count_tp/87.0)
    fpr2.append(count_fp/175.0)
    print("threshold="+str(threshold)+", count_tp="+str(count_tp)+", count_fp="+str(count_fp))

The above code takes as input the probabilities (pred_probs) given by the Logistic Regression classifier for survival. It then tries threshold values from 1 to 99% in 1% increments. This should produce the ROC curve points, as the ROC curve describes the prediction TPR and FPR at different probability thresholds. The code gives the result:

threshold=1, count_tp=87, count_fp=175
threshold=2, count_tp=87, count_fp=174
threshold=3, count_tp=87, count_fp=174
threshold=4, count_tp=87, count_fp=173
threshold=5, count_tp=87, count_fp=171
threshold=6, count_tp=87, count_fp=169
...
threshold=94, count_tp=3, count_fp=1
threshold=95, count_tp=1, count_fp=0
threshold=96, count_tp=0, count_fp=0
threshold=97, count_tp=0, count_fp=0
threshold=98, count_tp=0, count_fp=0
threshold=99, count_tp=0, count_fp=0

At 1% threshold, everyone that the classifier gave a probability of 1% or higher to survive is classified as a likely survivor. In this dataset, everyone is given 1% or more survival probability. This leads to everyone being classified as a likely survivor. Since everyone is classified as survivor, this gives true positives for all 87 real survivors, but also false positives for all the 175 non-survivors. At 6% threshold, the FPR has gone down a bit, with only 169 non-survivors getting false-positives as survivors.

At the threshold high-end, the situation reverses. At threshold 94%, only 3 true survivors get classified as likely survivors. Meaning only 3 actual survivors scored a probability of 94% or more to survive from the classifier. There is one false positive at 94%, so one who was predicted to have a really high probability to survive (94%) did not. At 95% there is only one predicted survivor, which is a true positive. After that no-one scores 96% or more. Such high outliers would probably make interesting points to look into in more detail, but I won’t go there in this post.

Using the true positive rates and false positive rates from the code above (variables tpr2 and fpr2), we can make a ROC curve for these calculations as:

plt.figure(figsize=(10, 6), dpi=80)
plt.plot(fpr2, tpr2, color='coral', label='ROC curve (area = %0.3f)' % auc(fpr2, tpr2))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate', fontsize=14)
plt.ylabel('True Positive Rate', fontsize=14)
plt.title('ROC test')
plt.legend(loc="lower right")
plt.show()

FIGURE 3. ROC curve for thresholds of 1-99% in 1% steps.

Comparing this with the ROC figure from sklearn (FIGURE 2), it is a near-perfect match. We can further visualize this by plotting the two on top of each other:

plt.figure(figsize=(10, 6), dpi=80)
plt.plot(fpr2, tpr2, color='coral', label='ROC curve (area = %0.3f)' % auc(fpr2, tpr2))
plt.plot(fpr, tpr, color='blue', label='ROC curve (area = %0.3f)' % auc(fpr, tpr))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate', fontsize=14)
plt.ylabel('True Positive Rate', fontsize=14)
plt.title('ROC test')
plt.legend(loc="lower right")
plt.show()

FIGURE 4. My 1-99% ROC vs the sklearn ROC.

There are some very small differences in the sklearn version having more “stepped” lines, whereas the one I generated from threshold of 1-99% draws the lines a bit more straight (“smoother”). Both end up with the exact same AUC value (0.877). So I would call this ROC curve calculation solved, and claim that the actual calculations match what I made here. Solved as in I finally seem to have understood what it exactly means.

What is the difference

Still, loose ends are not nice. So what is causing the small difference in the ROC lines from the sklearn implementation vs my code?

The FPR and TPR arrays/lists form the x and y coordinates of the graphs. So looking at those might help understand a bit:

fpr.shape

>(95,)

len(fpr2)

>99

The above shows that the sklearn FPR array has 95 elements, while the one I created has 99 elements. Initially I thought that maybe increasing the detail in the FPR/TPR arrays I generate would help match the two. But it seems I already generated more points than are in the sklearn implementation. Maybe looking at the actual values helps:

fpr

array([0.        , 0.        , 0.00571429, 0.00571429, 0.01142857,
       0.01142857, 0.01714286, 0.01714286, 0.02285714, 0.02285714,
       0.02857143, 0.02857143, 0.03428571, 0.03428571, 0.04      ,
       0.04      , 0.04571429, 0.04571429, 0.05142857, 0.05142857,
       0.05714286, 0.05714286, 0.06285714, 0.06285714, 0.07428571,
       0.07428571, 0.07428571, 0.08      , 0.08      , 0.09142857,
       0.09142857, 0.09714286, 0.09714286, 0.10857143, 0.10857143,
       0.12      , 0.12      , 0.14857143, 0.14857143, 0.16      ,
       0.16      , 0.17142857, 0.17142857, 0.17142857, 0.18857143,
       0.18857143, 0.21142857, 0.21142857, 0.22285714, 0.22857143,
       0.23428571, 0.23428571, 0.24      , 0.24      , 0.26285714,
       0.28571429, 0.32571429, 0.33714286, 0.34857143, 0.34857143,
       0.37714286, 0.37714286, 0.38285714, 0.38285714, 0.38857143,
       0.38857143, 0.4       , 0.4       , 0.43428571, 0.45714286,
       0.46857143, 0.62857143, 0.62857143, 0.63428571, 0.63428571,
       0.64      , 0.64      , 0.64571429, 0.65714286, 0.66285714,
       0.66285714, 0.68      , 0.69142857, 0.72      , 0.73142857,
       0.73714286, 0.74857143, 0.79428571, 0.82285714, 0.82285714,
       0.85142857, 0.86285714, 0.88      , 0.89142857, 1.        ])

np.array(fpr2)

array([1.        , 0.99428571, 0.99428571, 0.98857143, 0.97714286,
       0.96571429, 0.96571429, 0.95428571, 0.92      , 0.89714286,
       0.84571429, 0.78857143, 0.66285714, 0.63428571, 0.58857143,
       0.52      , 0.49142857, 0.48      , 0.40571429, 0.39428571,
       0.38285714, 0.37714286, 0.35428571, 0.33714286, 0.32      ,
       0.31428571, 0.29714286, 0.26285714, 0.25142857, 0.24571429,
       0.23428571, 0.22857143, 0.21142857, 0.21142857, 0.20571429,
       0.20571429, 0.2       , 0.19428571, 0.18857143, 0.18857143,
       0.17714286, 0.17714286, 0.17142857, 0.16      , 0.14857143,
       0.13714286, 0.13142857, 0.12      , 0.10857143, 0.10857143,
       0.10857143, 0.09714286, 0.09714286, 0.09714286, 0.08571429,
       0.08571429, 0.08      , 0.07428571, 0.07428571, 0.06857143,
       0.06285714, 0.05714286, 0.05714286, 0.05142857, 0.05142857,
       0.04571429, 0.04      , 0.04      , 0.04      , 0.04      ,
       0.03428571, 0.02857143, 0.02857143, 0.02857143, 0.02857143,
       0.02285714, 0.02285714, 0.02285714, 0.01714286, 0.01714286,
       0.01142857, 0.01142857, 0.01142857, 0.01142857, 0.01142857,
       0.01142857, 0.00571429, 0.00571429, 0.00571429, 0.00571429,
       0.00571429, 0.00571429, 0.00571429, 0.00571429, 0.        ,
       0.        , 0.        , 0.        , 0.        ])

In the above, I print two arrays/lists. The first one (fpr) is the sklearn array, the second one (fpr2) is the one I generated myself. The fpr2 list contains many duplicate numbers one after the other, whereas fpr has far more unique values. My guess is that the sklearn fpr/tpr combination contains only the unique points, whereas my fpr2 and tpr2 repeat several points multiple times.

What causes this? Looking at the sklearn roc_curve method, it actually returns 3 values, and so far I have only used 2 of them. The return values are the variables fpr, tpr, thr. The thr one is not yet used, and is actually named thresholds in the sklearn docs. What are these thresholds?

thr

array([0.95088415, 0.94523744, 0.94433151, 0.88735906, 0.86820046,
       0.81286851, 0.80847662, 0.79241033, 0.78157711, 0.76377304,
       0.75724659, 0.72956179, 0.71977863, 0.70401182, 0.70160521,
       0.66863069, 0.66341944, 0.65174523, 0.65024897, 0.6342464 ,
       0.63001635, 0.61868955, 0.61479088, 0.60172207, 0.59225844,
       0.58177374, 0.58027421, 0.57885387, 0.56673186, 0.54986331,
       0.54937079, 0.54882292, 0.52469167, 0.51584338, 0.50426613,
       0.48014018, 0.47789982, 0.45232349, 0.44519259, 0.44203986,
       0.43892674, 0.43378224, 0.42655293, 0.42502826, 0.40283515,
       0.39572852, 0.34370988, 0.34049813, 0.32382773, 0.32099169,
       0.31275499, 0.31017684, 0.30953277, 0.30348576, 0.28008661,
       0.27736946, 0.24317994, 0.24314769, 0.23412169, 0.23370882,
       0.22219555, 0.22039886, 0.21542313, 0.21186501, 0.20940184,
       0.20734276, 0.19973878, 0.19716815, 0.18398295, 0.18369904,
       0.18369725, 0.14078944, 0.14070302, 0.14070218, 0.13874243,
       0.13799297, 0.13715783, 0.13399095, 0.13372175, 0.13345383,
       0.13057344, 0.1279474 , 0.1275107 , 0.1270535 , 0.12700496,
       0.12689617, 0.12680186, 0.11906236, 0.11834502, 0.11440003,
       0.10933941, 0.10860771, 0.10552423, 0.10468277, 0.0165529 ])

This array shows that the thresholds are quite different from mine, and there is no fixed step used to vary them. Unlike my attempt at doing it in 1% increments, this list has both much bigger and much smaller changes in it. Let's try my generator code with this same set of threshold values:

tpr3 = []
fpr3 = []
#do the calculations for sklearn thresholds
for threshold in thr:
    ss = np.where(pred_probs >= threshold)
    count_tp = 0
    count_fp = 0
    for i in ss[0]:
        if y_test.values[i] == 1:
            count_tp += 1
        else:
            count_fp += 1
    tpr3.append(count_tp/87.0)
    fpr3.append(count_fp/175.0)
    print("threshold="+str(threshold)+", count_tp="+str(count_tp)+", count_fp="+str(count_fp))

And with the TPR and FPR lists calculated for these thresholds, we can visualize them as well and compare against the sklearn coordinates:

plt.figure(figsize=(10, 6), dpi=80)
plt.plot(fpr3, tpr3, color='coral', label='ROC curve (area = %0.3f)' % auc(fpr3, tpr3))
plt.plot(fpr, tpr, color='blue', label='ROC curve (area = %0.3f)' % auc(fpr, tpr))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate', fontsize=14)
plt.ylabel('True Positive Rate', fontsize=14)
plt.title('ROC test')
plt.legend(loc="lower right")
plt.show()

Giving the figure:

FIGURE 5. Overlapping ROC curves with shared thresholds.

This time only one line is visible, since the two fully overlap. So I would conclude that I have now also validated my understanding of the ROC curve. My guess is that sklearn does some nice calculations in the back for the ROC curve coordinates, identifying the threshold points where there are visible changes and providing only those. While I would still use the ROC function that sklearn or whatever library I use (e.g., Spark) provides, the understanding I managed to build should help me make better use of their results.
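
A side note I found while digging into this (as far as I can tell from the sklearn documentation): roc_curve() has a drop_intermediate parameter, True by default, which drops suboptimal thresholds that would not show up on a plotted ROC curve. That would explain why the sklearn arrays are shorter yet draw the same line. Something like this should return the full set of points:

fpr_all, tpr_all, thr_all = roc_curve(y_test, pred_probs, drop_intermediate=False)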

Varying threshold vs accuracy

OK, just because this is an overly long post already, and I still cannot stop, one final piece of exploration. What happens to the accuracy as the threshold is varied? As 0.5 seems to be the default threshold, is it always the best? In cases where we want to minimize false positives or maximize true positives, the benefit of threshold optimization is perhaps most obvious. But when just looking at accuracy, what is the change? To see, I collected the accuracy for every threshold value suggested by sklearn roc_curve(), and also for the exact 0.5 threshold (which was not in the roc_curve list, but I assume is the classifier default):

tpr3 = []
fpr3 = []
accuracies = []
accs_ths = []

#do the calculations for sklearn thresholds
for threshold in thr:
    ss = np.where(pred_probs >= threshold)
    count_tp = sum(y_test.values[i] for i in ss[0])
    count_fp = len(ss[0]) - count_tp
    my_tpr = count_tp/87.0
    my_fpr = count_fp/175.0

    ssf = np.where(pred_probs < threshold)
    count_tf = sum(1 for i in ssf[0] if y_test.values[i] == 0)

    tpr3.append(my_tpr)
    fpr3.append(my_fpr)
    acc = (count_tp+count_tf)/y_test.count()
    accuracies.append(acc)
    accs_ths.append((acc, threshold))
    print("threshold="+str(threshold)+", tp="+str(count_tp)+", fp="+str(count_fp), ", tf="+str(count_tf), ", acc="+str(acc))

When the accuracy is graphed against the ROC curve, this looks like:

FIGURE 6. ROC thresholds vs Accuracy.

Eyeballing this, at the very left of FIGURE 6 the accuracy is equal to predicting all non-survivors correctly (175/262=66.8%) but all survivors wrong. At the very right the accuracy is equal to predicting all survivors correctly but all non-survivors wrong (87/262=33.2%). The sweet spot is somewhere around 83% accuracy, with about 65% true positives and maybe 8% false positives. I am just eyeballing this from the graph, so don't take it too literally. The code to plot the accuracy against the ROC curve in FIGURE 6:

plt.figure(figsize=(10, 6), dpi=80)
plt.plot(fpr3, tpr3, color='blue', label='ROC curve (area = %0.3f)' % auc(fpr3, tpr3))
plt.plot(fpr3, accuracies, color='coral', label='Accuracy')
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate', fontsize=14)
plt.ylabel('True Positive Rate', fontsize=14)
plt.title('ROC test')
plt.legend(loc="lower right")
plt.show()

And the sorted list itself:

accs_ths.sort(reverse=True)
accs_ths

results in:

  accuracy,           threshold,          tp, tn,  total correct
 (0.8396946564885496, 0.5802742149381693, 58, 162, 220),
 (0.8396946564885496, 0.5667318576914661, 59, 161, 220),
 (0.8358778625954199, 0.5788538683111758, 58, 161, 219),
 (0.8358778625954199, 0.5493707899389861, 60, 159, 219),
 (0.8358778625954199, 0.5246916651561455, 61, 158, 219),
 (0.8320610687022901, 0.6342464019774986, 52, 166, 218),
 (0.8320610687022901, 0.6186895526474085, 53, 165, 218),
 (0.8320610687022901, 0.6017220679887019, 54, 164, 218),
 (0.8320610687022901, 0.5817737394188721, 56, 162, 218),
 (0.8320610687022901, 0.549863313475955, 59, 159, 218),
 (0.8320610687022901, 0.548822920002867, 60, 158, 218),
 (0.8320610687022901, 0.5042661255726298, 62, 156, 218),
 (0.8320610687022901, 0.5000000000000000, 62, 156, 218),
 (0.8282442748091603, 0.6300163468361882, 52, 165, 217),
 (0.8282442748091603, 0.6147908791071057, 53, 164, 217),
 (0.8282442748091603, 0.5158433833082536, 61, 156, 217),
 (0.8282442748091603, 0.4778998233729275, 63, 154, 217),
 (0.8244274809160306, 0.5922584447936917, 54, 162, 216),
 (0.8244274809160306, 0.4801401821434857, 62, 154, 216),
 ...

Here a number of thresholds actually give a higher score than the 0.5 one I used as the reference for the default classifier. The highest scoring thresholds are 0.5802 and 0.5667. The 0.5 threshold gets 218 predictions correct, with an accuracy of 0.8320 (matching the one from accuracy_score() at the beginning of this post). Looking at the thresholds and accuracies more generally, the ones slightly above the 0.5 threshold seem to do a bit better. A threshold over 0.5 but below 0.6 seems to score best here. But there is always an exception, of course (0.515 is lower). Overall, the differences between the values are small but interesting. Am I overfitting the test data when I tune the threshold and then evaluate against that same test data? If so, does the same consideration apply for optimizing precision/recall using the ROC curve? Is there some other reason why the classifier would use a 0.5 threshold that is not optimal? Well, my guess is, it might be a minor artefact of over/underfitting. No idea, really.

Area Under the Curve (AUC)

Before I forget. The title of the post was ROC/AUC. So what is area under the curve (AUC)? The curve is the ROC curve, and the AUC is the area under that ROC curve. See FIGURE 7, where I used the epic Aseprite to paint the area under the FIGURE 5 ROC curve. Brilliant. AUC refers to the area covered by this colored part under the ROC curve. The value of AUC is the fraction of the overall area it covers: consider the whole box of FIGURE 7 as having size 1 (TPR x FPR = 1 x 1 = 1), and in this case the AUC is the part marked in the figure as area = 0.877.

roc5_auc
FIGURE 7. Area Under the Curve.

I believe the rationale is, the more the colored area covers, or the bigger the AUC value, the better overall performance one could expect from the classifier. As the area grows bigger, the more the classifier is able to separate true positives from false positives at different threshold values.
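
As a sanity check of the number, the area under my manually collected curve can also be approximated numerically and compared against what sklearn’s auc() gives. A small sketch, assuming the fpr3 and tpr3 lists from above:

import numpy as np
from sklearn.metrics import auc

# sort the manually collected coordinates by FPR so the curve runs left to right
order = np.argsort(fpr3)
fpr_sorted = np.array(fpr3)[order]
tpr_sorted = np.array(tpr3)[order]

# approximate the area under the curve with the trapezoidal rule, and compare to sklearn
print("np.trapz AUC :", np.trapz(tpr_sorted, fpr_sorted))
print("sklearn AUC  :", auc(fpr_sorted, tpr_sorted))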

Uses for AUC/ROC

To me, AUC seems most useful in evaluating and comparing different machine learning algorithms. Which is also what I recall seeing it being used for. In such cases, the higher the AUC, the better overall performance you would get from the algorithm. You can then boast in your paper about having a higher overall performance metric than some other random choice.

ROC I see as mostly useful for providing extra information and an overview to help evaluate the options for TPR vs FPR in different threshold configurations for a classifier. The choice and interpretation depend on the use case, of course. The usual example in this domain is a test for cancer. You want to maximize your TPR so you miss fewer people with cancer. You then look for the optimal location on the ROC curve to climb onto, with regards to the cost vs possibly missed cases. So you would want a high TPR there, as far as you can afford I guess. You might get a higher FPR with it, but such is the tradeoff. In this case, the threshold would likely be lower rather than higher.

It seems harder to find examples of optimizing for low FPR with the tradeoff being a lower TPR as well. Perhaps one could look into the Kaggle competitions and pick, for example, the topic of targeted advertising. For a lower FPR, you could set a higher threshold rather than a lower one. But your use case could be pretty much anything, I guess.

Like I said, recently I have been looking into Spark and some algorithms seem to only give out the estimation as an AUC metric. Which is a bit odd but I guess there is some clever reason for that. I have not looked too deep into that aspect of Spark yet, probably I am just missing the right magical invocations to get all the other scores.

Some special cases are also discussed, for example, in the Fawcett paper. At some point on the curve, one classifier might reach a higher point in ROC space even while having a lower overall AUC value. That is, for a pair of classifiers, some thresholds may favor one classifier while other thresholds favor the other. Similarly, AUC can be higher overall but a specific classifier still better for a specific use case. Sounds a bit theoretical, but interesting.

Why is it called Receiver Operator Characteristic (ROC)?

You can easily find information on the ROC curve history referencing its origins in World War 2 and radar signal detection theory. The Wikipedia ROC article has some of this history, but it is quite short. As usual, Stackexchange gives a good reference. The 1953 article seems paywalled, and I do not have access. This short description characterizes it as measuring the ability of a radio receiver to produce quality readings, enabling the operator to distinguish between false positives and true positives.

Elsewhere I read that it originated from the Pearl Harbor attack in WW2, where analysts tried to understand why the radar operators failed to spot the incoming attack aircraft. What do I know. The internet is full of one-liner descriptions on this topic, all circling around the same definitions but never going into sufficient detail.

Conclusions

Well, the conclusion is that this was a way too long post on a simple term. And re-reading it, it is not even that clearly written. But that is what I got. It helped me think about the topic, and to clarify and verify what it really is. Good luck.

Predicting issue categories on Github

Practical examples of applying machine learning seem to be a bit difficult to find. So I tried to create one for a presentation I was doing on testing and data analytics. I made a review of works in the area, and just chose one to illustrate. This one tries to predict a target category to assign to an issue report. I used ARM mBed OS as a test target since it has issues available on Github, and there were some people who work with it attending the presentation.

This demo “service” I created works by first training a predictive model on a set of previous issue reports. I downloaded the reports from the issue repository. The amount of data available there was small enough that I just downloaded the issues manually, using the Github API that lets me download the data for 100 issues at a time. Automating the download should be pretty easy if needed. The amount of data is small, and there are a large number of categories to predict, so not the best setup for results, but it serves as an example to illustrate the concept.

And no, there is no deep learning involved here, so not quite that trendy. I don’t think it is all that necessary for this purpose or this size of data. But could work better of course, if you do try, post the code so we can play as well.

The Github issues API allows me to download the issues in batches. For example, to download page 12 of closed issues, with 100 issues per page, the URL to request is https://api.github.com/repos/ARMmbed/mbed-os/issues?state=closed&page=12&per_page=100. The API seems to cap the page size at 100 even if using bigger values. Or I just didn’t quite use it right, whichever. The API docs describe the parameters quite clearly. I downloaded open and closed issues separately, even if I did not use the separation in any meaningful way in the end.
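
Should one want to automate the download, a minimal sketch with the requests library could look like the following (the page range and output filenames here are just assumptions for illustration):

import json
import requests

# download closed issues page by page from the Github API and save each page as a JSON file
base_url = "https://api.github.com/repos/ARMmbed/mbed-os/issues"
for page in range(1, 13):  # hypothetical page range
    params = {"state": "closed", "page": page, "per_page": 100}
    response = requests.get(base_url, params=params)
    issues = response.json()
    if not issues:  # an empty page means there are no more issues
        break
    with open("data/%d-closed-issues.json" % page, "w", encoding="utf-8") as f:
        json.dump(issues, f)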

The code here is all in Python. The final classifier/prediction services code is available on my Github repository.

First build a set of stopwords to do some cleaning on the issue descriptions:

from string import punctuation
from nltk.corpus import stopwords

stop_words = set(stopwords.words('english'))
stop_words = stop_words.union(set(punctuation))
stop_words.update(["''", "--", "**"])

The above code uses the common NLTK stopwords, a set of punctuation symbols, and a few commonly occurring symbol combinations I found in the data. Since later on I clean it up with another regular expression, probably just the NLTK stopwords would suffice here as well..

To preprocess the issue descriptions before applying the machine learning algorithms:

import re
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

def preprocess_report(body_o):
	#clean issue body text. leave only alphabetical and numerical characters and some specials such as +.,:/\
	body = re.sub('[^A-Za-z0-9 /\\\_+.,:\n]+', '', body_o)
	# replace URL separators with space so the parts of the url become separate words
	body = re.sub('[/\\\]', ' ', body)
	# finally lemmatize all words for the analysis
	lemmatizer = WordNetLemmatizer()
	# text tokens are basis for the features
	text_tokens = [lemmatizer.lemmatize(word) for word in word_tokenize(body.lower()) if word not in stop_words]
	return text_tokens

Above code is intended to remove all but standard alphanumeric characters from the text, remove stop words, and tokenize the remaining text into separate words. It also splits URL’s into parts as separate words. The lemmatization changes known words into their baseforms (e.g., “car” and “cars” become “car”). This just makes it easier for the machine learning algorithm to match words together. Another option is stemming, but lemmatization produces more human-friendly words so I use that.
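
As a quick check of what the preprocessing does, a call on a made-up issue body (the text here is purely illustrative) might look like this:

# hypothetical example text, not from the actual dataset
sample = "The device fails to boot after updating. See https://os.mbed.com/docs for details."
print(preprocess_report(sample))
# prints lowercased, lemmatized tokens with stopwords removed,
# and the URL split into its parts as separate words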

I stored the downloaded issues as JSON files (as the Github API gives them) in the data directory. To read all these filenames for processing:

import glob

#names of files containing closed and open issues (at time of download)
closed_files = glob.glob("data/*-closed-*")
#assuming the open issues were saved with "-open-" in the filename
open_files = glob.glob("data/*-open-*")

To process those files, I need to pick only the ones with an assigned “component” value. This is the training target label. The algorithm is trained to predict this “component” value from the issue description, so without this label, a piece of data is not useful for training.

import json

#globals collected over all processed files
all_text_tokens = []
component_labels = []
total = 0

def process_files(files):
	'''
	process the given set of files by collecting issue body text and labels.
	also cleans and lemmatizes the body text

	:param files: names of files to process
	:return: nothing
	'''
	global total

	for filename in files:
		with open(filename, encoding="utf-8") as json_data:
			print(filename)
			all_issues = json.load(json_data)
			for issue in all_issues:
				labels = issue["labels"]
				for label in labels:
					if label["name"].startswith("component:"):
						name = label["name"]
						body_o = issue["body"]
						text_tokens = preprocess_report(body_o)
						all_text_tokens.append(text_tokens)
						#component_labels are prediction targets
						component_labels.append(name)
						total += 1

#process both the closed and open issue files
process_files(closed_files)
process_files(open_files)

print("total: ", total)

There is a limited number of such labeled data items, as many of the downloaded issues do not have this label assigned. The print at the end of the above code shows the total number of items with the “component” label given, and the number in this dataset is 1078.

Besides removing stop-words and otherwise cleaning up the documents for NLP, combining words sometimes makes sense. Pairs, triplets, and so on are sometimes meaningful. A typical example is the words “new” and “york” appearing separately in a document, versus “new york”. The latter is an example of a bi-gram, combining two words into “new_york”. To do this, I use the gensim package:

import gensim

#https://www.machinelearningplus.com/nlp/topic-modeling-gensim-python/
# Build the bigram and trigram models
bigram = gensim.models.Phrases(all_text_tokens, min_count=5, threshold=100) # higher threshold fewer phrases.
trigram = gensim.models.Phrases(bigram[all_text_tokens], threshold=100)

# Faster way to get a sentence clubbed as a trigram/bigram
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)

#just to see it works
print(trigram_mod[bigram_mod[all_text_tokens[4]]])

#transform identified word pairs and triplets into bigrams and trigrams
text_tokens = [trigram_mod[bigram_mod[text]] for text in all_text_tokens]

#build whole documents from text tokens. some algorithms work on documents not tokens.
texts = [" ".join(tokens) for tokens in text_tokens]

The above code uses thresholds and minimum co-occurrence counts to avoid combining every possible word with every other possible word. So only word pairs and triplets that are commonly found to occur together get combined (replaced) in the documents.

Use the Python data processing library Pandas to turn it into suitable format for the machine learning algorithms:

from pandas import DataFrame

df = DataFrame()

df["texts"] = texts
df["text_tokens"] = text_tokens
df["component"] = component_labels

print(df.head(2))

First to have a look at the data:

#how many issues are there in our data for all the target labels, assigned component counts
value_counts = df["component"].value_counts()
#print how many times each component/target label exists in the training data
print(value_counts)
#remove all targets for which we have less than 10 training samples.
#K-fold validation with 5 splits requires min 5 to have 1 in each split. Min 10 gives 2 per split, which is still tiny but at least it sorta runs
indices = df["component"].isin(value_counts[value_counts > 9].index)
#indices is a boolean mask: True for rows whose component has at least 10 samples.
#df.loc[indices, :] keeps those rows and drops the rest
df = df.loc[indices, :]

The above code actually already does a bit more. It also filters the dataset to remove the rows with component values that have fewer than 10 items. This is the unfiltered list:

component: tools              162
component: hal                128
component: export             124
component: networking         118
component: drivers            110
component: rtos                88
component: filesystem          80
component: tls                 78
component: docs                60
component: usb                 54
component: ble                 38
component: events              14
component: cmsis               10
component: stdlib               4
component: rpc                  4
component: uvisor               2
component: greentea-client      2
component: compiler             2

And after filtering, the last four rows will have been removed. So in the end, the dataset will not have any rows with the labels “rpc”, “uvisor”, “greentea-client”, or “compiler”. This is because I will later use stratified 5-fold cross-validation, which requires a minimum of 5 items of each. By filtering with a minimum of 10 instances per label, it is at least possible to have 2 of the least common “component” in each fold.

In a more realistic case, much more data would be needed to cover all categories, and I would also look at possibly combining some of the different categories. And rebuilding the model every now and then, depending on how much effort it is, how much new data comes in, etc.

To use the “component” values as target labels for machine learning, they need to be numerical (integers). This does the transformation:

from sklearn.preprocessing import LabelEncoder

# encode class values as integers
encoder = LabelEncoder()
encoded_label = encoder.fit_transform(df.component)

Just to see how the mapping of integer id’s to labels after label encoding looks:

import numpy as np

unique, counts = np.unique(encoded_label, return_counts=True)
print(unique) #the set of unique encoded labels
print(counts) #the number of instances for each label

The result (first line = id, second line = number of items):

[ 0  1  2  3  4  5  6  7  8  9 10 11 12]
[ 38  10  60 110  14 124  80 128 118  88  78 162  54]

Mapping the labels to integers:

#which textual label/component name matches which integer label
le_name_mapping = dict(zip(encoder.classes_, encoder.transform(encoder.classes_)))
print(le_name_mapping)

#which integer matches which textual label/component name
le_id_mapping = dict(zip(encoder.transform(encoder.classes_), encoder.classes_))
print(le_id_mapping)

So the first is to print “label: id” pairs, and the second to print “id: label” pairs. The first one looks like this:

'component: ble': 0, 
'component: cmsis': 1, 
'component: docs': 2, 
'component: drivers': 3, 
'component: events': 4, 
'component: export': 5, 
'component: filesystem': 6, 
'component: hal': 7, 
'component: networking': 8, 
'component: rtos': 9, 
'component: tls': 10, 
'component: tools': 11, 
'component: usb': 12

Now, to turn the text into suitable input for a machine learning algorithm, I transform the documents into their TF-IDF representation. Well, if you go all deep learning with LSTM and the like, this may not be necessary. But don’t take my word for it, I am still trying to figure some of that out.

TF-IDF stands for term frequency (TF) – inverse document frequency (IDF). For example, if the word “bob” appears often in a document, it has a high term frequency for that document. Generally, one might consider such a word to describe that document well (or the concepts in the document). However, if the same word also appears commonly in all the documents (in the “corpus”), it is not really specific to that document, and not very representative of that document vs all other documents in the corpus. So IDF is used to modify the TF so that words that appear often in a document but less often in others in the corpus get a higher weight. And if the word appears often across many documents, it gets a lower weight. This is TF-IDF.
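
To make the idea concrete, here is a tiny sketch of the classic TF-IDF calculation on a made-up three-document corpus. This is a simplified version; the sklearn TfidfVectorizer used below adds smoothing and normalization on top of the same idea:

import math

# toy corpus, purely illustrative
docs = [["bob", "likes", "cats"],
        ["bob", "likes", "dogs"],
        ["cats", "chase", "mice"]]

def tf_idf(word, doc, docs):
    # term frequency: how often the word appears in this document
    tf = doc.count(word) / len(doc)
    # inverse document frequency: penalize words that appear in many documents
    docs_with_word = sum(1 for d in docs if word in d)
    idf = math.log(len(docs) / docs_with_word)
    return tf * idf

print(tf_idf("bob", docs[0], docs))   # appears in 2 of 3 documents -> lower weight
print(tf_idf("mice", docs[2], docs))  # appears in only 1 document -> higher weight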

Traditional machine learning approaches also require a more fixed size set of input features. Since documents are of varying length, this can be a bit of an issue. Well, I believe some deep learning models also require this (e.g., CNN), while others less so (e.g., sequential models such as LSTM). Digressing. TF-IDF also (as far as I understand) results in a fixed length feature vector for all documents. Or read this on Stack Overflow and make your own mind up.

Anyway, to my understanding, the feature space (set of all features) after TF-IDF processing becomes the set of all unique words across all documents. Each of these is given a TF-IDF score for each document. For the words that do not exist in a document, the score is 0. And most documents don’t have all words in them, so this results in a very “sparse matrix”, where the zeroes are not really stored. That’s how you can actually process some reasonable sized set of documents in memory.

So, in any case, to convert the documents to their TF-IDF representation:

from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5)

#the documents to vectorize are the combined texts built earlier
features = df["texts"]

#transform all documents into TFIDF vectors.
#TF-IDF vectors are all same length, same word at same index, value is its TFIDF for that word in that document
features_transformed = vectorizer.fit_transform(features)

Above code fits the vectorizer to the corpus and then transforms all the documents to their TF-IDF representations. To my understanding (from SO), the fit part counts the word occurrences in the corpus, and the transform part uses these overall counts to transform each document into TF-IDF.
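
A small illustration of that split: fit_transform() learns the vocabulary and the IDF weights from the training corpus, while a plain transform() on a new document only reuses those learned weights without changing them.

# the vocabulary and IDF weights come from the corpus the vectorizer was fitted on
print(len(vectorizer.vocabulary_))

# a new, unseen document is mapped into that same feature space with transform()
new_doc = ["the network driver crashes on boot"]  # hypothetical text
new_vec = vectorizer.transform(new_doc)
print(new_vec.shape)  # (1, number of features learned during fit)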

It is possible also to print out all the words the TF-IDF found in the corpus:

#the TFIDF feature names is a long list of all unique words found
print(vectorizer.get_feature_names())
feature_names = np.array(vectorizer.get_feature_names())
print(len(feature_names))

Now to train a classifier to predict the component based on a given document:

from sklearn.ensemble import RandomForestClassifier

from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score

kfold = StratifiedKFold(n_splits=5) #5-fold cross validation

#the classifier to use, the parameters are selected based on a set i tried before
clf = RandomForestClassifier(n_estimators=50, min_samples_leaf=1, min_samples_split=5)

results = cross_val_score(clf, features_transformed, encoded_label, cv=kfold)

print("Baseline: %.2f%% (%.2f%%)" % (results.mean() * 100, results.std() * 100))

#fit the classifier on the TFIDF transformed word features, train it to predict the component
clf.fit(features_transformed, encoded_label)
probabilities = clf.predict_proba(features_transformed[0])
print(probabilities)

In the above I am using the RandomForest classifier, with a set of parameters previously tuned. I am also using 5-fold cross-validation, meaning the data is split into 5 different parts. The parts are “stratified”, meaning each fold has about the same percentage of each target label as the original set. This is why I removed the labels with fewer than 10 instances in the beginning, to have at least 2 for each class in each fold. Which is still super-tiny, but that’s what this example is about.

The last part of the code above also runs a prediction on one of the transformed documents just to try it out.

Now, to run predictions on previously unseen documents:

import requests

def predict_component(issue):
	'''
	use this to get a set of predictions for a given issue.

	:param issue: issue id from github.
	:return: list of tuples (component name, probability)
	'''
	#first load text for the given issue from github
	url = "https://api.github.com/repos/ARMmbed/mbed-os/issues/" + str(issue)
	r = requests.get(url)
	url_json = json.loads(r.content)
	print(url_json)
	#process the loaded issue data to format matching what the classifier is trained on
	issue_tokens = preprocess_report(url_json["body"])
	issue_tokens = trigram_mod[bigram_mod[issue_tokens]]
	issue_text = " ".join(issue_tokens)
	features_transformed = vectorizer.transform([issue_text])
	#and predict the probability of each component type
	probabilities = clf.predict_proba(features_transformed)
	result = []
	for idx in range(probabilities.shape[1]):
		name = le_id_mapping[idx]
		prob = (probabilities[0, idx]*100)
		prob_str = "%.2f%%" % prob
		print(name, ":", prob_str)
		result.append((name, prob_str))
	return result

This code takes as a parameter an issue number for the ARM mBed Github repo. It downloads the issue data and preprocesses it similarly to the training data (clean, tokenize, lemmatize, TF-IDF). This is then used as the set of features to predict the component, based on the model trained earlier.
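
Calling it for some issue number (the number here is just a made-up placeholder) would look like this:

# hypothetical issue number, used only to illustrate the call
predictions = predict_component(1234)
# sort the (component name, probability string) tuples to see the most likely components first
for name, prob_str in sorted(predictions, key=lambda p: float(p[1].rstrip("%")), reverse=True):
    print(name, prob_str)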

The “predict_component” method/function can then be called from elsewhere. In my case, I made a simple web page to call it. As noted in the beginning of this post, you can find that webserver code, as well as all the code above on my Github repository.

That’s pretty much it. Not very complicated to put some Python lines one after another, but knowing which lines and in which order is perhaps what takes the time to learn. Having someone else around to do it for you if you are a domain expert (e.g., testing, software engineering or similar in this case) is handy, but it can also be useful to have some idea of what happens, or how the algorithms in general work.

Something I left out in all the above was the code to try out different classifiers and their parameters. So I will just put it below for reference.

First a few helper methods:

import numpy as np
import pandas as pd

def top_tfidf_feats(row, features, top_n=25):
	''' Get top n tfidf values in row and return them with their corresponding feature names.'''
	topn_ids = np.argsort(row)[::-1][:top_n]
	top_feats = [(features[i], row[i]) for i in topn_ids]
	df = pd.DataFrame(top_feats)
	df.columns = ['feature', 'tfidf']
	return df

#this prints it for the first document in the set
arr = features_test_transformed[0].toarray()
top_tfidf_feats(arr[0], feature_names)

def show_most_informative_features(vectorizer, clf, n=20):
	feature_names = vectorizer.get_feature_names()
	coefs_with_fns = sorted(zip(clf.coef_[0], feature_names))
	top = zip(coefs_with_fns[:n], coefs_with_fns[:-(n + 1):-1])
	for (coef_1, fn_1), (coef_2, fn_2) in top:
		print ("\t%.4f\t%-15s\t\t%.4f\t%-15s" % (coef_1, fn_1, coef_2, fn_2))

In the above code, “top_tfidf_feats” prints the top words with the highest TF-IDF scores for a document. So in a sense, it prints the words that TF-IDF has determined to most uniquely represent that document.

The “show_most_informative_features” prints the top features that a given classifier has determined to be most descriptive/informative for distinguishing target labels. This only works for certain classifiers, which have such simple coefficients (feature weights). Such as multinomial naive-bayes (MultinomialNB below).

Here is the code to actually try out the classifiers then:

from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

#the train/test variables come from a single split of the TF-IDF features.
#the split parameters here are just an example, not necessarily the ones originally used
features_train_transformed, features_test_transformed, labels_train, labels_test = \
    train_test_split(features_transformed, encoded_label, test_size=0.2, stratify=encoded_label)

clf = MultinomialNB()
clf.fit(features_train_transformed, labels_train)

from sklearn.metrics import accuracy_score

y_pred = clf.predict(features_test_transformed)
y_true = labels_test
acc_score = accuracy_score(y_true, y_pred)
print("MNB accuracy:"+str(acc_score))

show_most_informative_features(vectorizer, clf)

#try it out on a single document
probabilities = clf.predict_proba(features_test_transformed[0])
print(probabilities)

from sklearn.ensemble import RandomForestClassifier

from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score

#set of parameters to try
estimators = [10, 20, 30, 40, 50]
min_splits = [5, 10, 20, 30, 40, 50]
min_leafs = [1, 2, 5, 10, 20, 30]

kfold = StratifiedKFold(n_splits=5) #5-fold cross validation

best_acc = 0.0
best_rf = None
for estimator in estimators:
	for min_split in min_splits:
		for min_leaf in min_leafs:
			print("estimators=", estimator, "min_split=", min_split, " min_leaf=", min_leaf)

			clf = RandomForestClassifier(n_estimators=estimator, min_samples_leaf=min_leaf, min_samples_split=min_split)

			results = cross_val_score(clf, features_transformed, encoded_label, cv=kfold)

			print("Baseline: %.2f%% (%.2f%%)" % (results.mean() * 100, results.std() * 100))

			if results.mean() > best_acc:
				best_acc = results.mean()
				best_rf = clf
				print("found better:", best_acc, ", ", best_rf)

print("best classifier:")
print(best_rf)

best_acc = 0
best_rf = None
for estimator in estimators:
	for min_split in min_splits:
		for min_leaf in min_leafs:
			print("estimators=", estimator, "min_split=", min_split, " min_leaf=", min_leaf)

			clf = RandomForestClassifier(n_estimators=estimator, min_samples_leaf=min_leaf, min_samples_split=min_split)
			clf.fit(features_train_transformed, labels_train)

			pred = clf.predict(features_test_transformed)

			accuracy = accuracy_score(labels_test, pred)

			print(accuracy)

			if accuracy > best_acc:
				best_acc = accuracy
				best_rf = clf
				print("found better:", best_acc, ", ", best_rf)
	

In the code above, I use loops to run through the parameters. There is also GridSearchCV in sklearn for this, as well as RandomizedSearchCV (for cases where trying all the combinations is too expensive). But I prefer the ability to control the loops, print out whatever I like and all that. For comparison, a sketch of the GridSearchCV version is below.
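
The GridSearchCV version runs the cross-validation for every parameter combination internally. A sketch using the same parameter lists as above:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

param_grid = {
    "n_estimators": [10, 20, 30, 40, 50],
    "min_samples_split": [5, 10, 20, 30, 40, 50],
    "min_samples_leaf": [1, 2, 5, 10, 20, 30],
}

# 5-fold stratified cross-validation over every parameter combination
search = GridSearchCV(RandomForestClassifier(), param_grid,
                      cv=StratifiedKFold(n_splits=5), scoring="accuracy")
search.fit(features_transformed, encoded_label)

print("best accuracy:", search.best_score_)
print("best parameters:", search.best_params_)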

The parameter-loop code above also shows two ways I tried to train/evaluate the RandomForest parameters. The first is with k-fold cross-validation, the latter with a single train-test split. I picked MultinomialNB and RandomForest because some internet searching gave me the impression they might work reasonably well for unbalanced class sets such as this one. Of course, the final idea is always to try and see what works. This worked quite fine for me. Or so it seems; machine learning always seems to be about iterating, learning, and updating as you go. More data could change this all, or maybe finding some mistake, having more domain or analytics knowledge, finding mismatching results, or anything really.

What “unbalanced” refers to is the number of instances of the different components in this dataset: some “components” have many bug reports, while others have far fewer. For many learning algorithms this seems to be an issue. Some searches indicated RandomForest should be fairly robust to this type of data, which is another reason I used it.

Running the above code to experiment with the parameters also produced some slightly concerning results. The accuracy for the classifier ranged from 30% to 95% with smallish parameter changes. I would guess that also speaks for the small dataset causing potential issues. Also, re-running the same code would give different classifications for new (unseen) instances. Which is what you might expect when I am not setting the randomization seed. But then I would also expect the accuracy to vary somewhat, which it didn’t. So just don’t take this as more than an example of how you might apply ML for some SW testing related tasks. Take it to highlight the need to always learn more, try more things, get a decent amount of data, evolve models constantly, etc. And post some comments on all the things you think are wrong in this post/code so we can verify the approach of learning and updating all the time :).

In any case, I hope the example is useful for giving an idea of one way how machine learning could be applied in software testing related aspects. Now write me some nice LSTM or whatever is the latest trend in deep learning models, figure out any issues in my code, or whatever, and post some comments. Cheers.

Finnish Topic Modelling

Previously I wrote about a few experiments I ran with topic-modelling. I briefly glossed over having some results for a set of Finnish text as an example of a smaller dataset. This is a bit deeper look into that..

I use two datasets, the Finnish Wikipedia dump, and the city of Oulu board minutes. The same ones I used before. Previously I covered topic modelling more generally, so I won’t go into too much detail here. To summarize, topic modelling algorithms (of which LDA, or Latent Dirichlet Allocation, is used here) find sets of words with different distributions over sets of documents. These are then called the “topics” discussed in those documents.

This post looks at how to use topic models for a different language (besides English) and what could one maybe do with the results.

Lemmatize (turn words into baseforms before use) or not? I choose to lemmatize for topic modelling. This seems to be the general consensus when looking up info on topic modelling, and in my experience it just gives better results as the same word appears only once. I covered POS tagging previously, and I believe it would be useful to apply here as well, but I don’t. Mostly because it is not needed to test these concepts, and I find the results are good enough without adding POS tagging to the mix (which has its issues as I discussed before). Simplicity is nice.

I used the Python Gensim package for building the topic models. As input, I used the Finnish Wikipedia text and the city of Oulu board minutes texts. I used my existing text extractor and lemmatizer for these (to get the raw text out of the HTML pages and PDF docs, and to baseform them, as discussed in my previous posts). I dumped the lemmatized raw text into files using slight modifications of my previous Java code, and then read the docs from those files as input to Gensim in a Python script.
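
I do not show the full Gensim script here, but its core is roughly the following kind of sketch (the variable names are mine for illustration; lemmatized_docs stands for the lists of lemmatized tokens read from the dump files):

from gensim import corpora, models

# lemmatized_docs: list of documents, each a list of lemmatized word tokens
dictionary = corpora.Dictionary(lemmatized_docs)
# bag-of-words representation of each document
corpus = [dictionary.doc2bow(doc) for doc in lemmatized_docs]

# train the LDA topic model; number of topics and passes varied per experiment below
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=50, passes=1)

# print the top words for the first topics
for topic_id, words in lda.print_topics(num_topics=10, num_words=10):
    print(topic_id, words)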

I started with the Finnish Wikipedia dump, using Gensim to provide 50 topics, with 1 pass over the corpus. First 10 topics that I got:

  • topic0=focus[19565] var[8893] liivi[7391] luku[6072] html[5451] murre[3868] verkkoversio[3657] alku[3313] joten[2734] http[2685]
  • topic1=viro[63337] substantiivi[20786] gen[19396] part[14778] taivutus[13692] tyyppi[6592] täysi[5804] taivutustyyppi[5356] liite[4270] rakenne[3227]
  • topic2=isku[27195] pieni[10315] tms[7445] aine[5807] väri[5716] raha[4629] suuri[4383] helppo[4324] saattaa[4044] heprea[3129]
  • topic3=suomi[89106] suku[84950] substantiivi[70654] pudottaa[59703] kasvi[46085] käännös[37875] luokka[35566] sana[33868] kieli[32850] taivutusmuoto[32067]
  • topic4=ohjaus[129425] white[9304] off[8670] black[6825] red[5066] sotilas[4893] fraasi[4835] yellow[3943] perinteinen[3744] flycatcher[3735]
  • topic5=lati[48738] eesti[25987] www[17839] http[17073] keele[15733] eki[12421] lähde[11306] dict[11104] sõnaraamat[10648] tallinn[8504]
  • topic6=suomi[534914] käännös[292690] substantiivi[273243] aihe[256126] muualla[254788] sana[194213] liittyvä[193298] etymologi[164158] viite[104417] kieli[102489]
  • topic7=italia[66367] substantiivi[52038] japani[27988] inarinsaame[9464] kohta[7433] yhteys[7071] vaatekappale[5553] rinnakkaismuoto[5469] taas[4986] voimakas[3912]
  • topic8=sana[548232] liittyvä[493888] substantiivi[298421] ruotsi[164717] synonyymi[118244] alas[75430] etymologi[64170] liikuttaa[38058] johdos[25603] yhdyssana[24943]
  • topic9=juuri[3794] des[3209] jumala[1799] tadžikki[1686] tuntea[1639] tekijä[1526] tulo[1523] mitta[1337] jatkuva[1329] levy[1197]
  • topic10=törmätä[22942] user[2374] sur[1664] self[1643] hallita[1447] voittaa[1243] piste[1178] data[1118] harjoittaa[939] jstak[886]

The format of the topic list I used here is “topicX=word1[count] word2[count]”, where X is the number of the topic, word1 is the first word in the topic, word2 the second, and so on. The [count] is how many times the word was associated with the topic in different documents. Consider it the strength, weight, or whatever of the word in the topic.

So just a few notes on the above topic list:

  • topic0 = mostly website related terms, interleaved with a few odd ones. Examples of odd ones; “liivi” = vest, “luku” = number/chapter (POS tagging would help differentiate), “murre” = dialect.
  • topic1 = mostly Finnish language related terms. “viro” = estonia = slightly odd to have here. It is the closest related language to Finnish but still..
  • topic3 = another Finnish language related topic. The odd one here is “kasvi” = plant. Generally this seems to be more related to words and their forms, whereas topic1 is maybe more about structure and relations.
  • topic5 = estonia related

Overall, I think this would improve given more passes over the corpus to train the model. This would give the algorithm more time and data to refine the model. I only ran it with one pass here since the training for more topics and with more passes started taking days and I did not have the resources to go there.

My guess is also that with more data and broader concepts (Wikipedia covering pretty much every topic there is..) you would also need more topics than the 50 I used here. However, I had to limit the size due to time and resource constraints. Gensim probably also has more advanced tuning options (e.g., parallel runs) that would benefit the speed. So I tried a few more sizes and numbers of passes with the smaller Oulu city board dataset, as it was faster to run.

Some topics for the city of Oulu board minutes, run for 20 topics and 20 passes over the training data:

  • topic0=oulu[2096] kaupunki[1383] kaupunginhallitus[1261] 2013[854] päivämäärä[575] vuosi[446] päätösesitys[423] jäsen[405] hallitus[391] tieto[387]
  • topic1=kunta[52] palvelu[46] asiakaspalvelu[41] yhteinen[38] viranomainen[25] laki[24] valtio[22] myös[20] asiakaspalvelupiste[19] kaupallinen[17]
  • topic2=oulu[126] palvelu[113] kaupunki[113] koulu[89] tukea[87] edistää[71] vuosi[71] osa[64] nuori[63] toiminta[61]
  • topic3=tontti[490] kaupunki[460] oulu[339] asemakaava[249] rakennus[241] kaupunginhallitus[234] päivämäärä[212] yhdyskuntalautakunta[206] muutos[191] alue[179]
  • topic5=kaupunginhallitus[1210] päätös[1074] jäsen[861] oulu[811] kaupunki[681] pöytäkirja[653] klo[429] päivämäärä[409] oikaisuvaatimus[404] matti[316]
  • topic6=000[71] 2012[28] oulu[22] muu[20] tilikausi[16] vuosi[16] yhde[15] kunta[14] 2011[13] 00000[13]
  • topic8=alue[228] asemakaava[96] rakentaa[73] tulla[58] oleva[56] rakennus[55] merkittävä[53] kortteli[53] oulunsalo[50] nykyinen[48]
  • topic10=asiakirjat.ouka.fi[15107] ktwebbin[15105] 2016[7773] eet[7570] pk_asil_tweb.htm?[7551] ktwebscr[7550] dbisa.dll[7550] url=http[7540] doctype[7540] =3&docid[7540]
  • topic11=yhtiö[31] osake[18] osakas[11] energia[10] hallitus[10] 18.11.2013[8] liite[7] lomautus[6] sähkö[6] osakassopimus[5]
  • topic12=13.05.2013[13] perlacon[8] kuntatalousfoorumi[8] =1418[6] meeting_date=21.3.2013[6] =2070[6] meeting_date=28.5.2013[6] =11358[5] meeting_date=3.10.2016[5] -31.8.2015[4]
  • topic13=001[19] oulu[11] 002[5] kaupunki[4] sivu[3] ���[3] palvelu[3] the[3] asua[2] and[2]

Some notes on the topics above:

  • The word “oulu” repeats in most of the topics. This is quite natural as all the documents are from the board of the city of Oulu. Depending on the use case for the topics, it might be useful to add this word to the list of words to be removed in the pre-cleaning phase for the documents before running the topic modelling algorithm. Or it might be useful information, along with the weight of the word inside the topic. Depends.
  • topic0 = generally about the board structure. For example, “kaupunki”=city, “kaupunginhallitus”=city board, “päivämäärä”=date, “päätösesitys”=proposal for decision.
  • topic1 = Mostly city service related words. For example, “kunta” = county, “palvelu” = service, “asiakaspalvelu” = customer service, “myös” = also, so something to add to the cleaners again.
  • topic2 = School related. For example, “koulu” = school, “tukea” = support, … Sharing again common words such as “kaupunki” = city, which may also be considered for removal or not depending on the case.
  • topic3 = City area planning related. For example, “tontti” = plot of land, “asemakaava” = zoning plan, …
  • In general quite good and focused topics here, so I think quite a good result overall. Some exceptions to consider:
  • topic10 = mostly garbage related to HTML formatting and website link structures. Still a real topic of course, so nicely identified.. Something to consider adding to the cleaning list for pre-processing.
  • topic12 = Seems related to some city finance related consultation (Perlacon seems to be such a company) and an associated event (the forum). With a bunch of meeting dates.
  • topic13 = unclear garbage
  • So in general, I guess reasonably good results but in real applications, several iterations of fine-tuning the words, the topic modelling algorithm parameters, etc. based on the results would be very useful.

So that was the city minutes topics for a smaller set of topics and more passes. What does it look like for 100 topics, and how does the number of passes over the corpus affect the larger size? More passes should give the algorithm more time to refine the topics, but smaller datasets might not have so many good topics..

For 100 topics, 1 pass, first 10 topics:

  • topic0=oulu[55] kaupunki[22] 000[20] sivu[14] palvelu[14] alue[13] vuosi[13] muu[11] uusi[11] tavoite[9]
  • topic1=kaupunki[18] oulu[17] jäsen[15] 000[10] kaupunginhallitus[7] kaupunginjohtaja[6] klo[6] muu[5] vuosi[5] takaus[4]
  • topic2=hallitus[158] oulu[151] 25.03.2013[135] kaupunginhallitus[112] jäsen[105] varsinainen[82] tilintarkastaja[79] kaupunki[75] valita[70] yhtiökokousedustaja[50]
  • topic3=kuntalisä[19] oulu[16] palkkatuki[15] kaupunki[14] tervahovi[13] henkilö[12] tukea[12] yritys[10] kaupunginhallitus[10] työtön[9]
  • topic4=koulu[37] oulu[7] sahantie[5] 000[5] äänestyspaikka[4] maikkulan[4] kaupunki[4] kirjasto[4] monitoimitalo[3] kello[3]
  • topic5=oulu[338] kaupunki[204] euro[154] kaupunginhallitus[143] 2013[105] vuosi[96] milj[82] palvelu[77] kunta[71] uusi[64]
  • topic6=000[8] oulu[7] kaupunki[4] vuosi[3] 2012[3] muu[3] kunta[2] muutos[2] 2013[2] sivu[1]
  • topic7=000[5] 26.03.2013[4] oulu[3] 2012[3] kunta[2] vuosi[2] kirjastojärjestelmä[2] muu[1] kaupunki[1] muutos[1]
  • topic8=oulu[471] kaupunki[268] kaupunginhallitus[227] 2013[137] päivämäärä[97] päätös[93] vuosi[71] tieto[67] 000[66] päätösesitys[64]
  • topic9=oulu[5] lomautus[3] 000[3] kaupunki[2] säästötoimenpidevapaa[1] vuosi[1] kunta[1] kaupunginhallitus[1] sivu[1] henkilöstö[1]
  • topic10=oulu[123] kaupunki[82] alue[63] sivu[43] rakennus[42] asemakaava[39] vuosi[38] tontti[38] 2013[35] osa[35]

Without going too much into translating every word, I would say these results are too spread out, so it seems that for this dataset a smaller set of topics would do better. This also seems to be visible in the word counts/strengths in the [square brackets]. The topics with small weights also seem like pretty poor topics, while the ones with bigger weights look better (just my opinion of course :)). Maybe something to consider when exploring the number of topics etc.

And the same run, this time with 20 passes over the corpus (100 topics, first 10 shown):

  • topic0=oulu[138] kaupunki[128] palvelu[123] toiminta[92] kehittää[73] myös[72] tavoite[62] osa[55] vuosi[50] toteuttaa[44]
  • topic1=-seurantatieto[0] 2008-2010[0] =30065[0] =170189[0] =257121[0] =38760[0] =13408[0] oulu[0] 000[0] kaupunki[0]
  • topic2=harmaa[2] tilaajavastuulaki[1] tilaajavastuu.fi[1] torjunta[1] -palvelu[1] talous[0] harmaantalous[0] -30.4.2014[0] hankintayksikkö[0] kilpailu[0]
  • topic3=juhlavuosi[14] 15.45[11] perussopimus[9] reilu[7] kauppa[6] juhlatoimikunta[6] työpaja[6] 24.2.2014[6] 18.48[5] tapahtumatuki[4]
  • topic4=kokous[762] kaupunginhallitus[591] päätös[537] pöytäkirja[536] työjärjestys[362] hyväksyä[362] tarkastaja[360] esityslista[239] valin[188] päätösvaltaisuus[185]
  • topic5=koulu[130] sivistys-[35] suuralue[28] perusopetus[25] tilakeskus[24] kulttuurilautakunta[22] järjestää[22] korvensuora[18] päiväkota[17] päiväkoti[17]
  • topic6=piste[24] hanke[16] toimittaja[12] hankesuunnitelma[12] tila[12] toteuttaa[11] hiukkavaara[10] hyvinvointikeskus[10] tilakeskus[10] monitoimitalo[9]
  • topic7=tiedekeskus[3] museo-[2] prosenttipohjainen[2] taidehankinta[1] uudisrakennushanke[1] hankintamääräraha[1] prosenttitaide[1] hankintaprosessi[0] toteutusajankohta[0] ulosvuokrattava[0]
  • topic8=euro[323] milj[191] vuosi[150] oulu[107] talousarvio[100] tilinpäätös[94] kaupunginhallitus[83] kaupunki[79] 2012[73] 2013[68]
  • topic9=päätös[653] oikaisuvaatimus[335] oulu[295] kaupunki[218] päivä[215] voi[211] kaupunginhallitus[208] posti[187] pöytäkirja[161] viimeinen[154]

Even the smaller topics here seem much better now with the increase in passes over the corpus. So perhaps the real difference just comes from having enough passes over the data, giving the algorithms more time and data to refine the models. At least I would not try without multiple passes based on comparing the results here of 1 vs 20 passes.

For example, topic2 here has small numbers but still all items seem related to grey market economy. Similarly, topic7 has small numbers but the words are mostly related to arts and culture.

So to summarize, it seems lemmatizing your words, exploring your parameters, and ensuring you have a decent amount of data and a decent number of passes for the algorithm are all good points. And properly cleaning your data, and iterating over the process many times to get these right (well, as “right” as you can).

To answer my “research questions” from the beginning: topic modelling for different languages and use cases for topic modelling.

First, lemmatize all your data (I prefer it over stemming but it can be more resource intensive). Clean all your data from the typical stopwords for your language, but also for your dataset and domain. Run the models and analysis several times, and keep refining your list of removed words to clean also based on your use case, your dataset and your domain. Also likely need to consider domain specific lemmatization rules as I already discussed with POS tagging.

Secondly, what use cases did I find looking at topic modelling use cases online? Actually, it seems really hard to find concrete actual reports of uses for topic models. Quora has usually been promising but not so much this time. So I looked at reports in the published research papers instead, trying to see if any companies were involved as well.

Some potential use cases from research papers:

Bug localization, as in finding locations of bugs in source code is investigated here. Source code (comments, source code identifiers, etc) is modelled as topics, which are mapped to a query created from a bug report.

Matching duplicates of documents in here. Topic distributions over bug reports are used to suggest duplicate bug reports. Not exact duplicates but describing the same bug. If the topic distributions are close, flag them as potentially discussing the same “topic” (bug).

Ericsson has used topic models to map incoming bug reports to specific components. To make resolving bugs easier and faster by automatically assigning them to the (correct) teams for resolution. Large historical datasets of bug reports and their assignments to components are used to learn the topic models. Topic distributions of incoming bug reports are used to give probability rankings for the bug report describing a specific component, in comparison to the topic distributions of previous bug reports for that component. Topic distributions are also used as explanatory data to present to the expert looking at the classification results. Later, different approaches are reported at Ericsson as well. So, just a reminder that topic models are not the answer to everything, even if they are useful components and worth a try in places.

In cyber security, this uses topic models to describe user activity as distributions over the different topics. Learn topic models from user activity logs, and describe each user’s typical activity as a topic distribution. If a log entry (e.g., session?) diverges too much from this topic distribution for the user, flag it as an anomaly to investigate. I would expect simpler things could work for this as well, but as input for anomaly detection, an interesting thought.

Tweet analysis is popular in NLP. This is an example of high-level tweet topic classification: politics, sports, science, … Useful input for recommendations etc., I am sure. A more targeted, domain specific example is using topics in typhoon-related tweet analysis and classification: worried, damage, food, rescue operations, flood, … useful input for situation awareness, I would expect. As far as I understood, topic models were generated, labeled, and then users (or tweets) assigned to the (high-level) topics by topic distributions. Tweets are very small documents, so that is something to consider, as discussed in those papers.

Use of topic models in biomedicine for text analysis. To find patterns (topic distributions) in papers discussing specific genes, for example. Could work more broadly as one tool to explore research in an area, to find clusters of concepts in broad sets of research papers on a specific “topic” (here, research on a specific gene). Of course, there likely exist a number of other techniques to investigate for that as well, but topic models could have potential.

Generally, labelling and categorizing large numbers of historical/archival documents to assist users in search. Build topic models, have experts review them, and give the topics labels. Then label your documents based on their topic distributions.

A bit further outside the box, split songs into segments based on their acoustic properties, and use topic modelling to identify different categories/types of music in large song databases. Then explore the popularity of such categories/types over time based on topic distributions over time. So the segments are your words, and the songs are your documents.

Finding duplicates of images in large data sets. Use image features as words, and images as documents. Build topic models from all the images, and find similar types of images by their topic distributions. Features could be edges, or even abstract ones such as those learned by something like a convolutional neural net. Assists in image search, I guess..

Most of these uses seem to be various types of search assistance, with a few odd ones thinking outside the box. With a decent understanding, and some exploration, I think topic models can be useful in many places. The academics would say “dude, XYZ would work just as well”. Sure, but if it does the job for me, and is simple and easy to apply..

Word2Vec with some Finnish NLP

To get a better view of the popular Word2Vec algorithm and its applications in different contexts, I ran experiments on Finnish language and Word2vec. Let’s see.

I used two datasets. The first one is the traditional Wikipedia dump. I got the Wikipedia dump for the Finnish version from October 20th, because I ran the first experiments around that time. The second dataset was the board minutes for the City of Oulu for the past few years.

After running my cleaning code on the Wikipedia dump, it reported 600783 sentences and 6778245 words for the cleaned dump. Cleaning here refers to removing all the extra formatting, HTML tagging, etc. Sentences were tokenized using Voikko. For the board minutes the same metrics were 4582 documents, 358711 sentences, and 986523 words. Most interesting, yes?

For running Word2vec I used the Deeplearning4J implementation. You can find the example code I used on Github.

Again I have this question of whether to use lemmatization or not. Do I run the algorithm on baseformed words or just unprocessed words in different forms?

Some prefer to run it after lemmatization, while generally the articles on word2vec say nothing on the topic but rather seem to run it on raw text. This description of a similar algorithm actually shows an example of mapping “frog” to “frogs”, further indicating use of raw text. I guess if you have really lots of data and a language that does not have a huge number of forms for different words, that makes more sense. Or if you find relations between forms of words more interesting.

For me, Finnish has so many forms of words (morphologies or whatever they should be called?) and generally I don’t expect to run with hundreds of billions of words of data, so I tried both ways (with and without lemmatization) to see. With my limited data and the properties of the Finnish language I would just go with lemmatization really, but it is always interesting to try and see.
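
My runs used the Deeplearning4J implementation linked above. Purely for illustration, a roughly equivalent Python sketch with the gensim Word2Vec implementation (an alternative, not the code I actually ran; parameter names per recent gensim versions) would be something like:

from gensim.models import Word2Vec

# sentences: a list of tokenized (and optionally lemmatized) sentences
model = Word2Vec(sentences, vector_size=200, window=5, min_count=5, workers=4)

# closest words to "auto" (Finnish for car) by similarity score
for word, score in model.wv.most_similar("auto", topn=10):
    print("auto vs", word, "=", score)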

Some results for my experiments:

Wikipedia without lemmatization, looking for the closest words to “auto”, which is Finnish for “car”. Top 10 results along with similarity score:

  • auto vs kuorma = 0.6297630071640015
  • auto vs akselin = 0.5929439067840576
  • auto vs auton = 0.5811734199523926
  • auto vs bussi = 0.5807990431785583
  • auto vs rekka = 0.578578531742096
  • auto vs linja = 0.5748337507247925
  • auto vs työ = 0.562477171421051
  • auto vs autonkuljettaja = 0.5613142848014832
  • auto vs rekkajono = 0.5595266222953796
  • auto vs moottorin = 0.5471497774124146

Words from above translated:

  • kuorma = load
  • akselin = axle’s
  • auton = car’s
  • bussi = bus
  • rekka = truck
  • linja = line
  • työ = work
  • autonkuljettaja = car driver
  • rekkajono = truck queue
  • moottorin = engine’s

A similarity score of 1 would mean a perfect match, and 0 would mean no similarity (the scores are cosine similarities between the word vectors, so they can even go negative). Word2vec builds a model representing the position of words in a “vector space”. This is inferred from “word embeddings”. This sounds fancy, and as usual, it is difficult to find a simple explanation of what is done. I view it as taking typically 100-300 numbers to represent each word’s relation to other words in the “word space”. These get adjusted by the algorithm as it goes through all the sentences and records each word’s relation to other words in those sentences. Probably all wrong in that explanation, but until someone gives a better one..
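
To make the score concrete, the cosine similarity between two vectors can be computed directly. A minimal sketch with made-up, tiny vectors:

import numpy as np

def cosine_similarity(a, b):
    # cosine of the angle between the vectors: 1 = same direction, 0 = unrelated, -1 = opposite
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# tiny made-up "word vectors" just to show the calculation; real ones have 100-300 dimensions
v_auto = np.array([0.9, 0.1, 0.3])
v_bussi = np.array([0.8, 0.2, 0.4])
print(cosine_similarity(v_auto, v_bussi))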

To preprocess the documents for word2vec, I split the documents into sentences to give the words a more meaningful context (a sentence vs just any surrounding words). There are other similar techniques, such as GloVe, that may work better with a more global “context” than a sentence. But anyway, this time I was playing with Word2vec, which I think is also interesting for many things. It also has lots of implementations and popularity.

Looking at the results above, there is the word “auton”, translating to “car’s”. The Finnish language has a large number of forms that different words can take. So, sometimes, it may be better to lemmatize, to see what the meaning of the word maps to, rather than matching specific forms of words. So I lemmatize with Voikko, the Finnish language lemmatizer, again. Re-run of the above, top 10:

  • auto vs ajoneuvo = 0.7123048901557922
  • auto vs juna = 0.6993820667266846
  • auto vs rekka = 0.6949941515922546
  • auto vs ajaa = 0.6905277967453003
  • auto vs matkustaja = 0.6886627674102783
  • auto vs tarkoitettu = 0.66249680519104
  • auto vs rakennettu = 0.6570218801498413
  • auto vs kuljetus = 0.6499230861663818
  • auto vs rakennus = 0.6315782070159912
  • auto vs alus = 0.6273047924041748

Meanings of the words in English:

  • ajoneuvo = vehicle
  • juna = train
  • rekka = truck
  • ajaa = drive
  • matkustaja = passenger
  • tarkoitettu = meant
  • rakennettu = built
  • kuljetus = transport
  • rakennus = building
  • alus = ship

So generally these mappings make some sense. Not sure about those building words. Some deeper exploration would probably help..

Some people also came up with the idea of POS tagging before running word2vec. Called it Sense2Vec and whatever. Just so you could better differentiate how different meanings of a word map differently. So I tried POS tagging with the tagger I implemented before. Results:

  • auto_N vs juna_N = 0.7195479869842529
  • auto_N vs ajoneuvo_N = 0.6762610077857971
  • auto_N vs alus_N = 0.6689988970756531
  • auto_N vs kone_N = 0.6615594029426575
  • auto_N vs kuorma_N = 0.6477057933807373
  • auto_N vs tie_N = 0.6470917463302612
  • auto_N vs seinä_N = 0.6453390717506409
  • auto_N vs kuljettaja_N = 0.6449363827705383
  • auto_N vs matka_N = 0.6337422728538513
  • auto_N vs pää_N = 0.6313328146934509

Meanings of the words in English:

  • juna = train
  • ajoneuvo = vehicle
  • alus = ship
  • kone = machine
  • kuorma = load
  • tie = road
  • seinä = wall
  • kuljettaja = driver
  • matka = trip
  • pää = head

soo… The weirdest ones here are the wall and head parts. Perhaps again a deeper exploration would tell more. The rest seem to make some sense just by looking.

And to do the same for the City of Oulu Board minutes. Now looking for a specific word for the domain. The word being “serviisi”, which is the city office responsible for food production for different facilities and schools. This time lemmatization was applied for all results. Results:

  • serviisi vs tietotekniikka = 0.7979459762573242
  • serviisi vs työterveys = 0.7201094031333923
  • serviisi vs pelastusliikelaitos = 0.6803742051124573
  • serviisi vs kehittämisvisio = 0.678106427192688
  • serviisi vs liikel = 0.6737961769104004
  • serviisi vs jätehuolto = 0.6682301163673401
  • serviisi vs serviisin = 0.6641604900360107
  • serviisi vs konttori = 0.6479293704032898
  • serviisi vs efekto = 0.6455909013748169
  • serviisi vs atksla = 0.6436249017715454

Because “serviisi” is a very domain specific word/name here, the general purpose Finnish lemmatization does not work for it. This is why “serviisin” is there again. To fix this, I added this and some other basic forms of the word to the list of custom spellings recognized by my lemmatizer tool. That is, the tool uses Voikko, but if the word is not found it tries a lookup in a custom list. And if it is still not found, it writes out a list of all unrecognized words, sorted by highest frequency first (to allow augmenting the custom list more effectively). A rough sketch of this fallback logic is shown below.
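
The fallback logic itself is simple. A Python sketch of the idea (voikko_baseform here is a placeholder for whatever lemmatizer call is in use, and the custom list content is just the example from above):

from collections import Counter

# hand-maintained map of domain words to their baseforms
custom_baseforms = {"serviisin": "serviisi"}
# unknown words are counted so the custom list can be extended, most frequent first
unrecognized = Counter()

def lemmatize(word, voikko_baseform):
    # voikko_baseform: placeholder function returning the baseform, or None if unknown
    base = voikko_baseform(word)
    if base is not None:
        return base
    if word in custom_baseforms:
        return custom_baseforms[word]
    unrecognized[word] += 1
    return word

# after processing all documents, dump the unknown words sorted by frequency:
# print(unrecognized.most_common())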

Results after change:

  • serviisi vs tietotekniikka = 0.8719592094421387
  • serviisi vs työterveys = 0.7782909870147705
  • serviisi vs johtokunta = 0.695137619972229
  • serviisi vs liikelaitos = 0.6921887397766113
  • serviisi vs 19.6.213 = 0.6853622794151306
  • serviisi vs tilakeskus = 0.673351526260376
  • serviisi vs jätehuolto = 0.6718368530273438
  • serviisi vs pelastusliikelaitos = 0.6589146852493286
  • serviisi vs oulu-koilismaan = 0.6495324969291687
  • serviisi vs bid=2300 = 0.6414187550544739

Or another run:

  • serviisi vs tietotekniikka = 0.864517867565155
  • serviisi vs työterveys = 0.7482070326805115
  • serviisi vs pelastusliikelaitos = 0.7050554156303406
  • serviisi vs liikelaitos = 0.6591876149177551
  • serviisi vs oulu-koillismaa = 0.6580390334129333
  • serviisi vs bid=2300 = 0.6545186638832092
  • serviisi vs bid=2379 = 0.6458192467689514
  • serviisi vs johtokunta = 0.6431671380996704
  • serviisi vs rakennusomaisuus = 0.6401894092559814
  • serviisi vs tilakeskus = 0.6375274062156677

So what are all these?

  • tietotekniikka = city office for ICT
  • työterveys = occupational health services
  • liikelaitos = company
  • johtokunta = board (of directors)
  • konttori = office
  • tilakeskus = facilities (office space) center
  • pelastusliikelaitos = emergency office
  • energia = energy
  • oulu-koilismaan = the name of the area surrounding the city
  • bid=2300 is an identifier for one of the Serviisi board meeting minutes main pages.
  • 19.6.213 seems to be a mistyped date; it can at least be found in one of the documents listing decisions by different city boards.

So almost all of the words that “serviisi” is found to be closest to are other city offices/companies responsible for different aspects of the city, such as ICT, energy, office space, emergency response, or occupational health. Makes sense.

OK, so much for the experimental runs. I should summarize something about this.

The Wikipedia results seem slightly better in the sense that the suggested words are more often valid words. For the city board minutes I should probably filter more based on the presence of special characters and numbers. Maybe this is a matter of larger datasets vs. smaller ones, where the “garbage” more easily drowns in the larger sea of data. Don’t know.

The word2vec algorithm also has a set of parameters to tune, which would probably be worth more investigation to get more optimized results for these different types of datasets. I simply used the same settings for both the city minutes and Wikipedia. Given the size differences, it would likely be interesting to at least play with the size of the vector space. Bigger datasets might benefit from a bigger vector space, which should let them express richer relations between different words; for smaller sets, a smaller space might be better. Similarly, the number of training iterations, minimum word frequencies, etc. should be tried a bit more. For me the goal here was to get a general idea of how this works and how to use it with Finnish datasets. For that, these experiments are enough.
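
For reference, a minimal sketch of the main knobs in gensim (recent versions) is below. The values are just examples, not the settings used for the runs above, and lemmatized_sentences stands for the token lists prepared as described earlier.

from gensim.models import Word2Vec

model = Word2Vec(
    sentences=lemmatized_sentences,  # list of token lists, prepared as described above
    vector_size=100,  # dimensionality of the vector space; bigger corpora may benefit from more
    window=5,         # context window size
    min_count=5,      # ignore words occurring fewer times than this
    epochs=10,        # number of training iterations over the corpus
    workers=4,        # parallel worker threads
)
model.save("fi_minutes.w2v")  # placeholder file name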

If you read any articles on Word2Vec you will likely also see the hype about the ability to do equations such as “king – man + woman = queen”. These come from training on large English corpora. It simply says that the relation of the word “queen” to the word “woman” in sentences is typically the same as the relation of “king” to “man”. But this is often the only, or one of very few, examples ever given. Looking at the city minutes example here: since “serviisi” seems to map closest to all the other offices/companies of the city, what do we get if we run the arithmetic “serviisi – liikelaitos” (liikelaitos being the common office/company concept)? I got things like “city traffic”, “reduce”, “children home”, “citizen specific”, “greenhouse gas”. Not really useful. So this seems most useful as a potential exploration tool, but it is hard to say in advance which part gives useful results and when. Of course, it is nicer to report on the interesting abstractions it finds than on the boring fails.
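
A minimal sketch of how such arithmetic is expressed in gensim is below, using the “serviisi – liikelaitos” example; the model file name is again a placeholder.

from gensim.models import Word2Vec

model = Word2Vec.load("fi_minutes.w2v")

# "serviisi" minus the generic office/company concept "liikelaitos":
print(model.wv.most_similar(positive=["serviisi"], negative=["liikelaitos"], topn=10))

# The classic English example would be expressed the same way:
# model.wv.most_similar(positive=["king", "woman"], negative=["man"])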

I think lemmatization makes sense in the cases I showed here. I have no interest in merely learning that the singular form of a word is related to the plural form of the same word, although I guess in some use cases that could be valid. Of course, for proper lemmatization you might also wish to first do POS tagging, to be able to choose the correct baseform from all the options presented. In this case I just took the first baseform from the list Voikko gives for each word.

Tokenization could also be of more interest. Finnish has a lot of compound words, some of which are visible in the above examples: for example “kuorma-auto” and “linja-auto” in the Wikipedia results, or the different “liikelaitos” combinations in the City of Oulu version. N-grams (combinations of words) would also be worth investigating. For example, “energia” in the city example could easily be related to the city power company called “Oulun Energia”. Many similar examples can likely be found in any language and domain vocabulary.
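
One way to pick up such multi-word combinations would be gensim’s phrase detection; a minimal sketch is below, where sentences again stands for the prepared token lists and the thresholds are just example values.

from gensim.models.phrases import Phrases, Phraser

# Detect frequently co-occurring word pairs, e.g. "oulun energia" -> "oulun_energia".
phrases = Phrases(sentences, min_count=5, threshold=10)
bigram = Phraser(phrases)

# Apply the detected bigrams; the result can be fed to Word2Vec like any other tokens.
bigram_sentences = [bigram[s] for s in sentences]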

Further custom spellings would also be useful. For example, “oulu-koilismaan” above should be spelled “oulu-koillismaan”, and it could further be baseformed together with its other forms as “oulu-koillismaa”. Collecting these from the unrecognized-words list should make this relatively easy, while filtering out the low-frequency occurrences.

So perhaps the most interesting question: what is this good for?

Not synonym search. Somehow over time I had gotten the idea that word2vec could give you some kind of synonyms and such. Clearly it is not for that, but rather for identifying words related to similar concepts and the like.

So generally I can see it being useful for exploring related concepts in documents, or more generally for exploring datasets and building concept maps, search definitions, etc. More as an input to human expert work than as something fully automated, as the results vary quite a bit.

Some interesting applications I found while looking at this:

  • Word2vec in Google type search, as well as search in general.
  • Exploring associations between medical terms. Perhaps helpful for identifying new links you did not think of before? Likely would work for other similar domains as well.
  • Mapping words in different languages together.
  • Spotify mapping similar songs together by treating songs as words and playlists as sentences.
  • Someone tried it for sentiment analysis. Not really sure how useful that was, as I just skimmed the article, but in general I can see how it could help find different types of words related to sentiments. As before, not necessarily as automated input but rather as input for an expert to build more detailed models.
  • Using the similarity score weights as a means to find different topics. Maybe you could combine this with topic modelling and then look at the diversity of topics?
  • Product recommendations, using products as words and sequences of purchases as sentences. Not sure how much the purchase order means, but an interesting idea.
  • Bet recommendations, modelling bets made by users with bet targets as words and sequences of bets as sentences, finding similarities with other bets to recommend.

So that was mostly that. Similar tools exist for many platforms, whichever gives you the kicks. For example, Voikko has a Python module on GitHub, and Gensim is a nice Python tool for many NLP processing tasks, including Word2Vec.

There are also lots of pretrained word2vec-style models available, especially for English: for example, Facebook’s FastText vectors, Stanford’s GloVe vectors, or the Google News corpus vectors. Some simple internet searches should turn up many such models, which I think are useful for general-purpose results. For more specific domains, training your own is good, as I did here for the city minutes.
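
Loading such a pretrained model with gensim is straightforward; the minimal sketch below assumes the Google News vectors have been downloaded as the usual binary file.

from gensim.models import KeyedVectors

wv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin.gz", binary=True)
print(wv.most_similar("car", topn=5))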

Many tools can also import word vector models built with other tools. For example, deeplearning4j mentions importing GloVe models, and Gensim lists support for FastText, VarEmbed and WordRank. So once you have a good idea of what such models can do and how to use them, building combinations of these is probably not too hard.

Finnish POS tagging part 2

Previously I wrote about building a Finnish POS tagger. This post elaborates a bit on training with OpenNLP, which I only skimmed over last time, puts the code for it out, and runs some additional tests.

I am again using the Finnish Treebank to get 4.4M pre-tagged sentences to train on. I start with a Python script that transforms the Treebank XML into a format suitable for OpenNLP. A short example of the output is below, in the format OpenNLP takes as input (at least in the configuration I used): one sentence per line, each word paired with its POS tag, word and tag separated by an underscore “_”.

  • 1_Num artikla_N Nimi_N ja_CC tarkoitus_N
  • Hankintakeskukseen_N sovelletaan_V perustamissopimuksen_N ja_CC tämän_Pron perussäännön_N määräyksiä_N ._Punct
  • Hankintakeskuksen_N toiminnan_N kestolle_N ei_V aseteta_V määräaikaa_N ._Punct

The tags have been assigned by human experts who provide the Treebank. The whole Treebank file is parsed and output similar to above is generated by the Python script.
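
The actual conversion script is on GitHub; the sketch below only illustrates the idea, and the element and attribute names (sentence, w, pos) are placeholders, since the real FinnTreeBank schema uses its own names.

import xml.etree.ElementTree as ET

def treebank_to_opennlp(xml_path, out_path):
    # For a 4.4M sentence file, ET.iterparse would be preferable to parse().
    tree = ET.parse(xml_path)
    with open(out_path, "w", encoding="utf-8") as out:
        for sentence in tree.iter("sentence"):      # placeholder element name
            tokens = [f'{w.text}_{w.get("pos")}'    # word_TAG pairs
                      for w in sentence.iter("w")]  # placeholder element name
            out.write(" ".join(tokens) + "\n")      # one sentence per line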

Check GitHub for the code to train the OpenNLP tagger, or use the OpenNLP command-line options.

Previously I described the test results using the Treebank data with a train/test split, showing reasonably good results. However, how well does it work in practice with some simple test sentences? Does it matter how the training and tagger input data is pre-processed? What do I mean by pre-processed?

Stemming and lemmatization are two basic transformations often used in NLP. Stemming cuts the ending of a word to get a simple form that matches all the different forms of the word; the result is not always a real “word”. For example, “argue”, “arguing” and “argus” could all stem to “argu”. Lemmatization, on the other hand, produces more “real” words (the Wikipedia link describes it as producing the dictionary base forms).
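
To make the difference concrete, the minimal sketch below contrasts the two for Finnish, assuming NLTK’s Snowball stemmer (which includes Finnish) and the libvoikko bindings are installed.

from nltk.stem.snowball import SnowballStemmer
import libvoikko

stemmer = SnowballStemmer("finnish")
voikko = libvoikko.Voikko("fi")

word = "autossa"  # "in the car"
print(stemmer.stem(word))           # a cut-down stem, not necessarily a real word
analyses = voikko.analyze(word)
if analyses:
    print(analyses[0]["BASEFORM"])  # a dictionary baseform, e.g. "auto"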

A related question that came to my mind: does it matter if you stem/lemmatize the words you give as input to the tagger for training and testing? I could not find a good answer on Google. There is one question on Stack Overflow about stemming vs. POS tagging, and the response seems to be not an answer but riddles… Who would’ve guessed that about the machine learning community? 😛

Well, reading the discussion and other answers on that Stack Overflow page suggests not stemming before POS tagging. And the Wikipedia pages on stemming and lemmatization describe the difference as lemmatization requiring the context (the POS tag) to work properly. Which makes sense, since words can have multiple meanings depending on their context (part of speech). So we should probably conclude that it is better not to stem or lemmatize before training a POS tagger (or before using it, I guess). But common sense never stopped us before, so let’s try it.

To see for myself, I tried to train and use the tagger with some different configurations:

  • Tagger: Plain = Takes words in the sentence and tries to POS tag them as is. Not stemmed, not lemmatized, just as they are.
  • Tagger: Voikko = Takes words in the sentence, converts them to baseform (lemma?), reconstructs the sentence from the baseformed words. You can see the actual results and effect in the output column in the results table below.
  • Trained on: 100k = The tagger was trained on the first 100k sentences in the Finnish Treebank.
  • Trained on: 4M = The tagger was trained on the first 4M sentences in the Finnish Treebank.
  • Trained on: basecol = The tagger was trained on baseform column of the treebank.
  • Trained on: col1 = The tagger was trained on column 1 of the treebank, containing the unprocessed words (no baseforming or anything else).
  • Trained on: voikko = The tagger was trained on column 1 of the treebank, but before training all words in the sentence were baseformed using Voikko. Similar to “Tagger: Voikko” but for training data.
  • Input: The input sentence fed to the tagger. This was split into an array on whitespace, as the OpenNLP tagger takes an array of words per sentence as input.
  • Output: The output from the tagger, formatted as word_tag. Word = the word given to the tagger as input for that part of the sentence, tag = the POS tag assigned by the tagger for that word.

So the Treebank actually has a “baseform” column, which the Treebank docs describe as containing the baseform of each word. However, I do not have the tool that was used to produce those baseforms; maybe it was done manually by the people who also tagged the sentences, I don’t know. I use Voikko as the tool to baseform words.

I still wanted to try using the baseform column in the Treebank, so I ran all the words (baseform column and column 1) in the Treebank through Voikko to see if it would recognize them, recorded all the misses, and sorted them from highest occurrence count to lowest. This showed me that the Treebank has its own “oddities”. Some examples:

  • “merkittävä” becomes “merkittää”
  • “päivästä” becomes “päivänen”
  • “työpaikkoja” becomes “työ#paikko”

These are just a few examples of frequently occurring and odd-looking baseforms in the Treebank. None of these, in my opinion, map quite directly to understandable Finnish words. Voikko also gives different results (different baseforms for “merkittävä”, “päivästä”, etc.), so the two baseforming approaches would not match. I wanted results that I felt I could show to people who would understand what they meant. On the other hand, some of the words in the Treebank are quite domain-specific and valid, but Voikko does not recognize them. Common Treebank examples of this include “CN-koodeihin”, “CN-koodiin”, “ETY-tyyppihyväksynnän”, “ETY-tyyppihyväksyntään” and “läsnäollessa”. The Treebank has valid baseforms for these, but Voikko does not recognize them.

So I just tried it with the different configuration versions above, as illustrated in the results table below:

Each group below shows the tagger type and training data, followed by the test sentences as “input → output” pairs (output formatted as word_TAG):

Tagger: Plain, trained on: 100k basecol

  • junassa on vessa → junassa_N on_A vessa_N
  • tuli tuli tulipesästä → tuli_N tuli_N tulipesästä_V
  • voi on maukasta leivän päällä → voi_N on_A maukasta_N leivän_PrfPrc päällä_Abbr
  • juodaan jaffaa ladassa → juodaan_Unkwn jaffaa_Punct ladassa_Unkwn
  • liika vesi vesitti kilpailun → liika_N vesi_N vesitti_N kilpailun_Abbr
  • syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. → syynä_N ovat_Unkwn todennäköisimmin_Adv rutiininomaiset_Unkwn tietokannan_Unkwn ylläpitotoimet._Punct
  • teemu, miksi sinulla on tuollaisia lauseita tuossa? → teemu,_Punct miksi_Unkwn sinulla_V on_Unkwn tuollaisia_Unkwn lauseita_Unkwn tuossa?_Punct
  • no kun jotain piti keksiä :( → no_Interj kun_CS jotain_Adv piti_N keksiä_V :(_Punct

Tagger: Plain, trained on: 100k col1

  • junassa on vessa → junassa_N on_V vessa_N
  • tuli tuli tulipesästä → tuli_V tuli_V tulipesästä_N
  • voi on maukasta leivän päällä → voi_V on_V maukasta_N leivän_N päällä_N
  • juodaan jaffaa ladassa → juodaan_V jaffaa_CC ladassa_N
  • liika vesi vesitti kilpailun → liika_N vesi_N vesitti_V kilpailun_N
  • syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. → syynä_N ovat_V todennäköisimmin_Adv rutiininomaiset_A tietokannan_N ylläpitotoimet._Punct
  • teemu, miksi sinulla on tuollaisia lauseita tuossa? → teemu,_Punct miksi_N sinulla_N on_V tuollaisia_A lauseita_N tuossa?_Punct
  • no kun jotain piti keksiä :( → no_Abbr kun_CS jotain_Pron piti_V keksiä_A :(_Punct

Tagger: Voikko, trained on: 100k voikko

  • junassa on vessa → juna_N olla_V vessa_N
  • tuli tuli tulipesästä → tuli_V tuli_N tulipesä_N
  • voi on maukasta leivän päällä → voi_V olla_V maukas_N leipä_N pää_N
  • juodaan jaffaa ladassa → juoda_V jaffa_CC lada_V
  • liika vesi vesitti kilpailun → liika_Adv vesi_N vesittää_V kilpailu_N
  • syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. → syy_N olla_V todennäköinen_Adv rutiininomainen_A tietokanta_N ylläpitotoimet._Punct
  • teemu, miksi sinulla on tuollaisia lauseita tuossa? → teemu,_Punct mikä_Pron sinä_N olla_V tuollainen_A lause_N tuossa?_Punct
  • no kun jotain piti keksiä :( → no_Interj kun_CS jokin_Pron pitää_V keksi_N :(_Punct

Tagger: Voikko, trained on: 100k basecol

  • junassa on vessa → juna_N olla_V vessa_Unkwn
  • tuli tuli tulipesästä → tuli_N tuli_N tulipesä_N
  • voi on maukasta leivän päällä → voi_N olla_V maukas_N leipä_N pää_N
  • juodaan jaffaa ladassa → juoda_PrsPrc jaffa_CC lada_PrsPrc
  • liika vesi vesitti kilpailun → liika_N vesi_N vesittää_V kilpailu_N
  • syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. → syy_N olla_V todennäköinen_Adv rutiininomainen_A tietokanta_N ylläpitotoimet._Punct
  • teemu, miksi sinulla on tuollaisia lauseita tuossa? → teemu,_Punct mikä_Pron sinä_Pron olla_V tuollainen_A lause_N tuossa?_Punct
  • no kun jotain piti keksiä :( → no_Interj kun_CS jokin_Pron pitää_V keksi_Adv :(_Punct

Tagger: Plain, trained on: 4M basecol

  • junassa on vessa → junassa_Unkwn on_Unkwn vessa_Unkwn
  • tuli tuli tulipesästä → tuli_N tuli_N tulipesästä_Punct
  • voi on maukasta leivän päällä → voi_N on_V maukasta_Unkwn leivän_Abbr päällä_Abbr
  • juodaan jaffaa ladassa → juodaan_Unkwn jaffaa_Punct ladassa_Unkwn
  • liika vesi vesitti kilpailun → liika_N vesi_N vesitti_N kilpailun_Abbr
  • syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. → syynä_A ovat_Unkwn todennäköisimmin_Adv rutiininomaiset_Unkwn tietokannan_Adv ylläpitotoimet._Abbr
  • teemu, miksi sinulla on tuollaisia lauseita tuossa? → teemu,_Punct miksi_Unkwn sinulla_Unkwn on_Unkwn tuollaisia_Unkwn lauseita_Punct tuossa?_Punct
  • no kun jotain piti keksiä :( → no_Interj kun_Punct jotain_Adv piti_N keksiä_PrfPrc :(_Punct

Tagger: Plain, trained on: 4M col1

  • junassa on vessa → junassa_N on_V vessa_N
  • tuli tuli tulipesästä → tuli_V tuli_V tulipesästä_N
  • voi on maukasta leivän päällä → voi_V on_V maukasta_A leivän_N päällä_N
  • juodaan jaffaa ladassa → juodaan_V jaffaa_V ladassa_PrsPrc
  • liika vesi vesitti kilpailun → liika_A vesi_N vesitti_V kilpailun_N
  • syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. → syynä_N ovat_V todennäköisimmin_Adv rutiininomaiset_A tietokannan_N ylläpitotoimet._Punct
  • teemu, miksi sinulla on tuollaisia lauseita tuossa? → teemu,_Punct miksi_Pron sinulla_Pron on_V tuollaisia_A lauseita_N tuossa?_Punct
  • no kun jotain piti keksiä :( → no_Interj kun_Punct jotain_Pron piti_V keksiä_N :(_Punct

Tagger: Voikko, trained on: 4M col1

  • junassa on vessa → juna_N olla_V vessa_N
  • tuli tuli tulipesästä → tuli_V tuli_V tulipesä_N
  • voi on maukasta leivän päällä → voi_V olla_V maukas_A leipä_N pää_N
  • juodaan jaffaa ladassa → juoda_V jaffa_Num lada_V
  • liika vesi vesitti kilpailun → liika_A vesi_N vesittää_V kilpailu_N
  • syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. → syy_N olla_V todennäköinen_A rutiininomainen_A tietokanta_N ylläpitotoimet._Punct
  • teemu, miksi sinulla on tuollaisia lauseita tuossa? → teemu,_Punct mikä_Pron sinä_Pron olla_V tuollainen_A lause_N tuossa?_Punct
  • no kun jotain piti keksiä :( → no_Interj kun_Punct jokin_Pron pitää_V keksi_N :(_Punct

Tagger: Voikko, trained on: 4M voikko

  • junassa on vessa → juna_N olla_V vessa_N
  • tuli tuli tulipesästä → tuli_V tuli_N tulipesä_N
  • voi on maukasta leivän päällä → voi_V olla_V maukas_A leipä_N pää_N
  • juodaan jaffaa ladassa → juoda_N jaffa_N lada_N
  • liika vesi vesitti kilpailun → liika_Adv vesi_N vesittää_V kilpailu_N
  • syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. → syy_N olla_V todennäköinen_A rutiininomainen_A tietokanta_N ylläpitotoimet._Punct
  • teemu, miksi sinulla on tuollaisia lauseita tuossa? → teemu,_Punct mikä_Pron sinä_Pron olla_V tuollainen_A lause_N tuossa?_Punct
  • no kun jotain piti keksiä :( → no_Interj kun_Punct jokin_Pron pitää_V keksi_N :(_Punct

Tagger: Voikko, trained on: 4M basecol

  • junassa on vessa → juna_N olla_V vessa_N
  • tuli tuli tulipesästä → tuli_N tuli_N tulipesä_N
  • voi on maukasta leivän päällä → voi_N olla_V maukas_A leipä_N pää_N
  • juodaan jaffaa ladassa → juoda_V jaffa_N lada_V
  • liika vesi vesitti kilpailun → liika_N vesi_N vesittää_V kilpailu_N
  • syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet. → syy_N olla_V todennäköinen_Adv rutiininomainen_A tietokanta_N ylläpitotoimet._Punct
  • teemu, miksi sinulla on tuollaisia lauseita tuossa? → teemu,_Punct mikä_Pron sinä_Pron olla_V tuollainen_A lause_N tuossa?_Punct
  • no kun jotain piti keksiä :( → no_Interj kun_CS jokin_Pron pitää_V keksi_N :(_Punct

You can find all the POS tags etc. listed and explained in the Treebank Manual. Here are most of the above:

  • N = Noun
  • V = Verb
  • PrfPrc = Past participle
  • A = Adjective
  • CS = Subordinating conjunction
  • Abbr = Abbreviation
  • Num = Numeral
  • Punct = Punctuation
  • Adv = Adverb
  • Unkwn = Unknown

Some of these (CS, PrfPrc, Adv, …) are a bit more detailed than I ever wanted to get after leaving primary school 100 years ago. That is to say, I have no idea what they mean. Luckily I am really only interested in the POS tag as input to other algorithms, so I don’t really care what they are, as long as they are correct and help to differentiate the words in context. Of course, given my lack of feel for the language nuances and the academic details of all those tags, I am not very good at judging the correctness of the taggings above. But a few notes anyway:

  • Using the baseform column from the Treebank to train the tagger and to tag unprocessed sentences (tagger “plain”): Lots of unknowns and failed taggings in general. Size of training corpus makes little difference.
  • Using Treebank col 1 to train and the “plain” tagger gives better results. Still it has some issues but most general cases are not too bad.
  • Baseforming all words in the sentence to be tagged with Voikko (tagger “Voikko”) and using col 1 to train results in about similar performance as “plain” tagger with col 1.
  • Tagger “Voikko” with training type “voikko” and 4M sentences seems to give the best match. It has some issues though.
  • Baseforming the sentence to tag with Voikko has a chicken-and-egg problem (as mentioned in the Wikipedia links above). You can get multiple baseforms for a word, depending on what POS the word is. If you need the baseform to do POS tagging, how do you pick which one to use? For example, “keksiä” in Finnish can refer to “inventing” but can also be a form of “keksi”, “cookie”. Here I just used the first baseform given by Voikko, which for “keksiä” happens to be the “cookie” one, when the correct one in this case would be the “invent” one.
  • As there are two different baseforming approaches here (Voikko and Treebank baseform col), mixing them causes worse results than using a unified baseforming approach (Voikko for both training and later tagging). So better to stick with just the same baseformer/lemmatizer for all data.
  • Special elements such as smileys would need to be trained separately :). Here they are just treated as punctuation.
  • “Jaffa” is a Finnish drink. It gets classified here correctly as N, but also as numeral, punctuation, or verb. Maybe it is too rare a word or something? Numeral and punctuation are still odd.
  • Splitting on whitespace here causes issues with sentences ending in punctuation. The last words of sentences ending with “.”, “?”, or similar end up classified as “Punct”. Better splitting (tokenization) is needed; a minimal sketch is shown after this list. Since punctuation is also part of the tagger training data, it should not just be discarded, as I guess it can provide valuable context for the rest of the words.
  • Some of my test sentences were made up to be difficult to POS tag, and with the very limited set of sentences above, this is likely not a generally representative test. For example, “Tuli tuli” can be translated as “Fire came” (the intent here), “Fire fire”, or “It came, it came”, and valid taggings would probably include “N V N”, “V N N”, “N N N”, “V V V”. Some of it might even be difficult for humans without broader context, although “tulipesä” (fireplace) would likely tip people off. Similarly, “voi” could also be translated as “butter” (the intent here) or “could”.
  • Much bigger tests would be very useful to categorize what can be tagged right, what causes issues, etc.
  • It would also be useful to have a system available to choose whether the sentence was tagged right or not, and to retrain further the tagger with the errors. Maybe use a generator to build further examples of such errors.
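
As referenced in the list above, a minimal sketch of splitting off punctuation instead of splitting only on whitespace could look like this (a real tokenizer, such as the ones shipped with OpenNLP, would of course be more thorough):

import re

def tokenize(sentence):
    # Keep runs of word characters and runs of punctuation as separate tokens.
    return re.findall(r"\w+|[^\w\s]+", sentence)

print(tokenize("syynä ovat todennäköisimmin rutiininomaiset tietokannan ylläpitotoimet."))
# ['syynä', 'ovat', 'todennäköisimmin', 'rutiininomaiset', 'tietokannan', 'ylläpitotoimet', '.']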

So I guess the better configurations here can do a reasonable job of tagging most sentences, as illustrated by these results and the ones I listed before (the accuracy test on Treebank test/train split).

Most obviously, words with multiple meanings (multiple possible POS tags) still require some more tuning. Maybe something with broader context would help (e.g., previous and following sentences, iterations, probabilistic approaches, …)?

I am not so familiar with all the related work, such as Google’s Parsey McParseface. Because you know, it’s deep learning and that is all the rage, right? 🙂 It would be interesting to try, but the whole setup is more than I can do right now.

Better tuning of the OpenNLP parameters might also help if I had more expertise in that, and in how they map to the peculiarities of the Finnish language. In general, I am sure I am missing plenty of the magic tricks the NLP gurus could use.

In general, I guess it is most likely just better to train the tagger before lemmatization/baseforming, as noted before.

What more can I summarize here? Not much beyond the bullets and points above. But this may provide a useful starting point for those interested in POS tagging for Finnish, and possibly some useful points for other languages as well.