Commit

deploy: bda16b2
yangwenz committed Apr 25, 2024
1 parent 36f9d6b commit b74e528
Showing 2 changed files with 26 additions and 26 deletions.
24 changes: 12 additions & 12 deletions latest/tutorials/vision/feature_visualization_tf.ipynb
@@ -2,15 +2,15 @@
"cells": [
{
"cell_type": "markdown",
"id": "2e5dbb39",
"id": "e2805f5e",
"metadata": {},
"source": [
"### Feature visualization (Tensorflow)"
]
},
{
"cell_type": "markdown",
"id": "9160f798",
"id": "e52bf8a4",
"metadata": {},
"source": [
"This is an example of feature visualization with a Tensorflow model. The feature visualization in OmniXAI is an optimization-based method, allowing to set different objectives, e.g., layer, channel, neuron or direction. For more information, please visit https://distill.pub/2017/feature-visualization/"
@@ -19,7 +19,7 @@
{
"cell_type": "code",
"execution_count": 1,
"id": "fe216f24",
"id": "55c1dae6",
"metadata": {},
"outputs": [],
"source": [
@@ -31,7 +31,7 @@
{
"cell_type": "code",
"execution_count": 2,
"id": "452c2af5",
"id": "508b2b13",
"metadata": {},
"outputs": [],
"source": [
@@ -41,7 +41,7 @@
},
{
"cell_type": "markdown",
"id": "ce291673",
"id": "1ebdd662",
"metadata": {},
"source": [
"Here we choose the VGG16 model for demonstration (you may test other CNN models, e.g., ResNet). The target layer is the layer to analyze."
@@ -50,7 +50,7 @@
{
"cell_type": "code",
"execution_count": 3,
"id": "64b57fd0",
"id": "e0632243",
"metadata": {},
"outputs": [],
"source": [
@@ -60,7 +60,7 @@
},
{
"cell_type": "markdown",
"id": "15e3c800",
"id": "1f770115",
"metadata": {},
"source": [
"The first example is the \"layer\" objective, where we optimize the input image such that the average output of the layer is maximized."
@@ -69,7 +69,7 @@
{
"cell_type": "code",
"execution_count": 4,
"id": "dc721bae",
"id": "6430f852",
"metadata": {},
"outputs": [
{
@@ -101,7 +101,7 @@
},
{
"cell_type": "markdown",
"id": "e13cbc3a",
"id": "392d304b",
"metadata": {},
"source": [
"The second example is the \"channel\" objective, where the input image is optimized such that the output of the specified channel is maximized."
@@ -110,7 +110,7 @@
{
"cell_type": "code",
"execution_count": 5,
"id": "61799385",
"id": "bd0d943f",
"metadata": {},
"outputs": [
{
Expand Down Expand Up @@ -142,7 +142,7 @@
},
{
"cell_type": "markdown",
"id": "3df2eb9f",
"id": "93766ac9",
"metadata": {},
"source": [
"We can also consider a combination of multiple objectives. The default weight for each objective is 1.0. We can set other weights as well."
@@ -151,7 +151,7 @@
{
"cell_type": "code",
"execution_count": 6,
"id": "0242caef",
"id": "b718fb4d",
"metadata": {},
"outputs": [
{
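The "layer" objective described in this notebook optimizes the input image so that the layer's average activation is maximized. As a minimal, framework-free sketch of that idea (a toy one-dimensional "layer" and finite-difference gradients; `toy_layer` and `visualize_layer` are illustrative names and assumptions, not part of OmniXAI's API):

```python
import math
import random

def toy_layer(x):
    """Stand-in for a network layer: one tanh 'unit' per input element."""
    return [math.tanh(3.0 * v) for v in x]

def layer_objective(x):
    """The 'layer' objective: mean activation over all units."""
    acts = toy_layer(x)
    return sum(acts) / len(acts)

def visualize_layer(n=8, steps=200, lr=0.1, eps=1e-4):
    """Gradient ascent on the input, with finite-difference gradients."""
    random.seed(0)
    x = [random.uniform(-0.1, 0.1) for _ in range(n)]
    for _ in range(steps):
        grad = []
        for i in range(n):
            hi, lo = list(x), list(x)
            hi[i] += eps
            lo[i] -= eps
            grad.append((layer_objective(hi) - layer_objective(lo)) / (2 * eps))
        x = [v + lr * g for v, g in zip(x, grad)]
    return x

x = visualize_layer()
print(round(layer_objective(x), 2))  # close to 1.0: every unit is driven toward saturation
```

With a real CNN, the gradient would come from automatic differentiation (e.g. `tf.GradientTape` or `torch.autograd`) and the optimized variable would be an image tensor, but the ascent loop is the same.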
28 changes: 14 additions & 14 deletions latest/tutorials/vision/feature_visualization_torch.ipynb
@@ -2,15 +2,15 @@
"cells": [
{
"cell_type": "markdown",
"id": "cbe48f6e",
"id": "1e7671a7",
"metadata": {},
"source": [
"### Feature visualization (PyTorch)"
]
},
{
"cell_type": "markdown",
"id": "c1217a2c",
"id": "654d78c7",
"metadata": {},
"source": [
"This is an example of feature visualization with a Tensorflow model. The feature visualization in OmniXAI is an optimization-based method, allowing to set different objectives, e.g., layer, channel, neuron or direction. For more information, please visit https://distill.pub/2017/feature-visualization/"
@@ -19,7 +19,7 @@
{
"cell_type": "code",
"execution_count": 1,
"id": "8e6f9fc6",
"id": "d3ae7a16",
"metadata": {},
"outputs": [],
"source": [
@@ -31,7 +31,7 @@
{
"cell_type": "code",
"execution_count": 2,
"id": "8e9f5cdb",
"id": "543d5aee",
"metadata": {},
"outputs": [],
"source": [
@@ -42,7 +42,7 @@
},
{
"cell_type": "markdown",
"id": "b29ed704",
"id": "99aebcd5",
"metadata": {},
"source": [
"Here we choose the VGG16 model for demonstration (you may test other CNN models, e.g., ResNet). The target layer is the layer to analyze."
@@ -51,7 +51,7 @@
{
"cell_type": "code",
"execution_count": 3,
"id": "38797d3e",
"id": "bad44ed4",
"metadata": {},
"outputs": [],
"source": [
@@ -62,7 +62,7 @@
},
{
"cell_type": "markdown",
"id": "4eb35c9f",
"id": "8c8cba47",
"metadata": {},
"source": [
"The first example is the \"layer\" objective, where we optimize the input image such that the average output of the layer is maximized."
@@ -71,7 +71,7 @@
{
"cell_type": "code",
"execution_count": 4,
"id": "35faf532",
"id": "ff364761",
"metadata": {},
"outputs": [
{
@@ -103,7 +103,7 @@
},
{
"cell_type": "markdown",
"id": "6b8c3fdf",
"id": "a9eda779",
"metadata": {},
"source": [
"The second example is the \"channel\" objective, where the input image is optimized such that the output of the specified channel is maximized."
@@ -112,7 +112,7 @@
{
"cell_type": "code",
"execution_count": 5,
"id": "c231b61b",
"id": "54ba41fb",
"metadata": {},
"outputs": [
{
Expand Down Expand Up @@ -144,7 +144,7 @@
},
{
"cell_type": "markdown",
"id": "c3d87621",
"id": "47655dbd",
"metadata": {},
"source": [
"We can also consider a combination of multiple objectives. The default weight for each objective is 1.0. We can set other weights as well."
@@ -153,7 +153,7 @@
{
"cell_type": "code",
"execution_count": 6,
"id": "d1d7e16b",
"id": "47d62fed",
"metadata": {
"scrolled": true
},
@@ -190,7 +190,7 @@
},
{
"cell_type": "markdown",
"id": "b68212ff",
"id": "9232e2f7",
"metadata": {},
"source": [
"Let's try another target layer and use FFT preconditioning:"
@@ -199,7 +199,7 @@
{
"cell_type": "code",
"execution_count": 7,
"id": "6ec1c67f",
"id": "41bb6a0e",
"metadata": {},
"outputs": [
{
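Both notebooks close with a weighted combination of objectives, each weight defaulting to 1.0. A hedged sketch of how such a combination composes, again with toy stand-ins rather than OmniXAI's actual objective classes:

```python
import math

def layer_objective(x):
    """Toy 'layer' objective: mean activation over all units."""
    return sum(math.tanh(v) for v in x) / len(x)

def channel_objective(x, channel):
    """Toy 'channel' objective: activation of a single channel."""
    return math.tanh(x[channel])

def combined_objective(x, weights=(1.0, 2.0), channel=0):
    """Weighted sum of objectives; each weight defaults to 1.0 in the notebooks."""
    w_layer, w_channel = weights
    return w_layer * layer_objective(x) + w_channel * channel_objective(x, channel)

x = [0.5, -0.2, 0.1, 0.3]
print(round(combined_objective(x), 4))  # → 1.0882
```

During optimization the combined scalar is maximized exactly like a single objective. The PyTorch notebook's last cell additionally enables FFT preconditioning, which parameterizes the image by its Fourier coefficients so that gradient ascent updates all spatial frequencies at comparable rates.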
