The conference features 12 mini-symposium sessions, each gathering expert presentations on a specific topic.
Applications of shape and structural optimization
Summary: T.B.A.
Summary: T.B.A.
Summary: T.B.A.
Calculus of variations
Summary: Let \Omega be a generic open set of the Euclidean space. We consider the Deny-Lions spaces, defined by the completion of the space of compactly supported smooth functions with respect to the L^p norm of the gradient. We give a characterization of continuous (or compact) embeddings of these spaces into L^q, in terms of the summability of the so-called torsion function of \Omega. We also introduce a new Hardy-type inequality, which plays an important role in the proofs. The results presented are contained in a recent paper written in collaboration with Berardo Ruffini (Montpellier).
Summary: We present a variational model from micromagnetics involving a nonlocal Ginzburg-Landau type energy for S^1-valued vector fields. These vector fields form domain walls, called Néel walls, that correspond to one-dimensional transitions between two directions within the unit circle S^1. Due to the nonlocality of the energy, a Néel wall is a two-length-scale object, comprising a core and two logarithmically decaying tails. Our aim is to determine the energy differences leading to repulsion or attraction between Néel walls. In contrast to the usual Ginzburg-Landau vortices, we obtain a renormalised energy for Néel walls that shows both a tail-tail interaction and a core-tail interaction. This is a novel feature for Ginzburg-Landau type energies that entails attraction between Néel walls of the same sign and repulsion between Néel walls of opposite signs. This is joint work with Roger Moser (University of Bath).
Summary: It has been well known since the seventies in phase-transition theory that minimizers of the Modica-Mortola functional tend to minimize the perimeter of the transition set in the limit. Recently, with F. Santambrogio, we proposed a modification of that functional so as to minimize the length of a compact connected set (instead of a perimeter), which gives a possible numerical method to tackle the so-called Steiner problem (finding the connected set of least length containing a prescribed family of points), or variants of it. In the time allotted to me, I shall present some new developments on the existence and regularity of minimizers for this new approximating functional, together with a few numerical results. These come from a work in progress with M. Bonnivard and V. Millot.
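For reference (a standard form, with notation not taken from the abstract), the Modica-Mortola functional is

```latex
MM_\varepsilon(u) \;=\; \int_\Omega \Bigl( \varepsilon\,|\nabla u|^2 + \frac{1}{\varepsilon}\,W(u) \Bigr)\,dx,
\qquad W(u) = u^2(1-u)^2,
```

and, as $\varepsilon \to 0$, it $\Gamma$-converges (up to a multiplicative constant) to the perimeter of the transition set $\{u=1\}$, which is the minimization-of-perimeter statement the summary refers to.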
Summary: We investigate the existence of ground states for a focusing NLS functional with a power nonlinearity. The environment is a generic noncompact metric graph, that is, a metric space obtained by glueing together (by the identification of some of their endpoints) a finite number of intervals and half-lines, according to the topology of a graph. The admissible functions are subject to a mass constraint, and we will discuss how the value of the prescribed mass, the topology of the graph and its metric properties may interact, in order to guarantee (or prevent) the existence of minimisers.
Optimization
Summary: In a wide spectrum of topics, including inverse problems in imaging, one ends up solving a large-scale convex optimization problem; that is, one wants to minimize a sum of convex functions, possibly nonsmooth and composed with linear operators, with variables living in Hilbert spaces. We show that several existing proximal splitting algorithms, appropriate for this class of problems, are particular instances of the forward-backward iteration, expressed in a particular primal-dual product space.
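As an illustration of the forward-backward iteration mentioned above (a minimal sketch, not taken from the talk), consider the model problem min 0.5*||Ax - b||^2 + lam*||x||_1, where the nonsmooth part is handled by its proximal operator, soft-thresholding:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (the "backward" step).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, step, n_iter=500):
    """Sketch of forward-backward splitting for 0.5*||Ax-b||^2 + lam*||x||_1.

    `step` must be smaller than 2 / ||A||^2 for convergence.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                         # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
    return x
```

With A the identity, the iteration converges to the soft-thresholding of b, which is the exact minimizer in that case.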
Summary: Total variation (TV) denoising has been extensively studied in imaging sciences since its introduction by Rudin, Osher and Fatemi in 1992. A folklore statement is that this method is able to restore sharp edges, but at the same time might introduce some staircasing (i.e. "fake" edges) in flat areas. However, aside from numerical evidence, almost no theoretical results are available to back up these claims. This talk will be concerned with the geometric stability of TV denoising under small L^2 additive noise. In particular: given a small neighbourhood around the support of Df, where f is the clean function, when will the gradient support of the regularized solution be contained inside this neighbourhood? We shall characterize the regions where support instabilities can occur. For indicators of so-called "calibrable" sets (such as disks or properly eroded squares), our main result shows that under small L^2 additive noise, the level lines of the recovered function will cluster tightly around the edges of the original set. This is joint work with Antonin Chambolle, Vincent Duval and Gabriel Peyre.
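For reference, the Rudin-Osher-Fatemi model referred to above is usually written, for a noisy image $f$ and a regularization parameter $\lambda > 0$, as

```latex
\min_{u} \; |Du|(\Omega) \;+\; \frac{1}{2\lambda} \int_\Omega (u - f)^2 \, dx,
```

where $|Du|(\Omega)$ denotes the total variation of $u$; the talk studies how the gradient support of the minimizer relates to the support of $Df$.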
Summary: The Alternating Direction Method of Multipliers (ADMM) is an optimization algorithm that aims at minimizing the sum of two functions by alternately solving simpler subproblems linked to each function separately. This particular structure, as well as its good convergence properties, makes the ADMM widely used, notably in the signal processing community. However, the ADMM depends on a free parameter that greatly affects the convergence speed and whose tuning remains difficult in most situations. In this presentation, we investigate different methods, based on parameter tuning but also on relaxation and inertia, to practically accelerate the convergence of the ADMM.
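To make the alternating structure and the free parameter concrete, here is a minimal sketch (an illustration, not the speaker's code) of ADMM applied to the lasso problem min 0.5*||Ax-b||^2 + lam*||z||_1 subject to x = z; `rho` is the free penalty parameter whose tuning the talk discusses:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """ADMM for min 0.5*||Ax-b||^2 + lam*||z||_1  s.t.  x = z."""
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)                            # scaled dual variable
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached solve for the x-step
    Atb = A.T @ b
    for _ in range(n_iter):
        x = M @ (Atb + rho * (z - u))          # x-subproblem (quadratic)
        z = soft_threshold(x + u, lam / rho)   # z-subproblem (prox of the l1 term)
        u = u + x - z                          # dual update on the multiplier
    return z
```

The convergence speed, but not the limit, depends on `rho`, which is the practical tuning issue the summary refers to.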
Summary: In this talk, we consider the Forward-Backward splitting algorithm and its variants (inertial schemes, FISTA) for solving structured optimization problems. The goal of this talk is to establish the local convergence of these methods when the involved functions are partly smooth relative to an active manifold. We show that all these methods correctly identify the active manifold in finite time, and then enter a local linear convergence regime, which we characterize precisely based on the geometry of the underlying manifold. The obtained results are verified by several concrete numerical experiments.
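As a point of comparison with the plain forward-backward iteration, here is a minimal FISTA sketch (illustrative only, not from the talk) for the same model problem min 0.5*||Ax-b||^2 + lam*||x||_1; the only change is the inertial extrapolation step:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=300):
    """FISTA (inertial forward-backward) for 0.5*||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)   # inertial extrapolation
        x, t = x_new, t_new
    return x
```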
Control in finite dimension
Summary: A linear switched system is a system that switches between two or more linear dynamics. The trajectories are assumed to be non-increasing, which corresponds for instance to systems whose energy cannot increase. We analyze the behaviour of the trajectories which do not tend to zero, and we show that their limit trajectories are piecewise analytic. This result implies that the switching laws for which the system does not tend to zero are very rare, in a sense to be made precise.
Summary: It is well known that if the pair (A,B) satisfies the Kalman rank condition, then the system x'=Ax+Bu, with state x and control u, can be controlled in arbitrarily small time. I will show that this result fails when constraints are added on the state variable, and I will give a characterisation of the minimal time required to steer a given initial point to a given target. Some consequences of this result for the heat equation will be given.
Summary: It is well known that approximate controllability does not imply exact controllability in general, as the classical irrational winding on the torus shows. It is also known that approximate controllability together with the Lie bracket generating condition implies exact controllability, and that for finite-dimensional quantum systems exact controllability is equivalent to the Lie bracket generating condition. In this talk, based on representation-theoretic considerations, we show that approximate and exact controllability are equivalent properties for general closed finite-dimensional quantum systems. We also give a new explicit characterization of zero-time controllability for control-affine finite-dimensional quantum systems. This talk is based on joint works with A. Agrachev, U. Boscain, J-P. Gauthier and F. Rossi.
Summary: Carnot groups serve as models for the nilpotent approximation of the tangent space in sub-Riemannian geometry. Their study is thus important in order to understand more general sub-Riemannian structures. In this talk, I will focus on step-2 free Carnot groups, also known as the Brockett integrator. For the (3,6) Carnot group, O. Myasnichenko found and described the cut locus, and he gave a conjecture for the shape of the cut locus in the general (k, k(k+1)/2) case. This problem had already been proposed and partially studied by R. Brockett in an article thirty years ago. In collaboration with Luca Rizzi, we have disproved Myasnichenko's conjecture by finding a set of cut points which coincides with the one proposed by Myasnichenko for the cases k=2 and k=3, but is strictly bigger than his when k>3. In this talk, I will present Myasnichenko's conjecture and explain how to obtain this bigger set of cut points. This is a work in progress and, at this time, we cannot assert that our set is the cut locus in the general (k, k(k+1)/2) case (even when k=4).
Control of partial differential equations
Summary: We consider a damped wave equation on a smooth bounded domain, with Ventcel boundary condition, with a linear damping, acting either in the interior or at the boundary. This equation is a model for a vibrating structure with a layer with higher rigidity of small thickness. By means of a proper Carleman estimate for second-order elliptic operators near the boundary, we derive a resolvent estimate for the wave semigroup generator along the imaginary axis, which in turn yields the logarithmic decay rate of the energy.
Summary: Unique continuation is very often proved by Carleman estimates or by Holmgren's theorem. The first requires the strong geometric assumption of pseudoconvexity of the hypersurface. The second only requires that the hypersurface be non-characteristic, but the coefficients need to be analytic. Motivated by the example of the wave equation, several authors (Tataru, Robbiano-Zuily, Hörmander) finally proved in great generality that unique continuation holds in an intermediate situation where the coefficients are analytic in only part of the variables. In particular, for the wave equation, this made it possible to prove unique continuation across any non-characteristic hypersurface for non-analytic metrics. In this talk, after presenting these works, I will describe some recent work where we quantify this unique continuation. This leads to logarithmic stability estimates that are optimal in general. They quantify the penetration into the shadow region and the cost of approximate controllability for waves. This is joint work with Matthieu Léautaud.
Summary: In this talk, we will discuss Carleman estimates for one-dimensional fourth-order parabolic equations and their applications to the exact controllability to the trajectories, the cost of null controllability, and the stability of inverse problems.
Summary: We consider the one dimensional Schrödinger equation with a bilinear control and prove the rapid stabilization of the linearized equation around the ground state. The feedback law ensuring the rapid stabilization is obtained using a transformation that maps the solution of the linearized equation to the solution of an exponentially stable target linear equation. A uniqueness condition for the transformation is introduced to deal with the non-local terms arising in the kernel system. The continuity and invertibility of the transformation will follow from exact controllability of the linearized system. This is a joint work with J.-M. Coron and L. Gagnon.
Shape optimization under uncertainties
H. Harbrecht (Basel University (Switzerland)): Shape optimization for quadratic functionals and states with random right-hand sides [joint work with Marc Dambrine (Department of Mathematics, Université de Pau et des Pays de l'Adour) and Charles Dapogny (Laboratoire Jean Kuntzmann, Université Joseph Fourier)].
Summary: In this talk, we investigate a particular class of shape optimization problems under uncertainties on the input parameters. More precisely, we are interested in the minimization of the expectation of a quadratic objective in a situation where the state function depends linearly on a random input parameter. This framework covers important objectives such as tracking-type functionals for elliptic second order partial differential equations and the compliance in linear elasticity. We show that the robust objective and its gradient are completely and explicitly determined by low-order moments of the random input. We then derive a cheap, deterministic algorithm to minimize this objective and present model cases in structural optimization.
Summary: Since its original introduction in structural design, density-based topology optimization has been applied to a number of other fields such as microelectromechanical systems, photonics, acoustics and fluid mechanics. The methodology has been well accepted in industrial design processes where it can provide competitive designs in terms of cost, materials and functionality under a wide set of constraints. However, the optimized topologies are often considered as conceptual due to loosely defined topologies and the need for postprocessing. Subsequent amendments can affect the optimized design performance and in many cases can completely destroy the optimality of the solution. Therefore, the goal of this presentation is to review recent advancements in obtaining manufacturable topology-optimized designs. The focus is on methods for imposing minimum and maximum length scales, and ensuring manufacturable, well-defined designs with robust performances. The overview discusses the limitations, the advantages and the associated computational costs, and is exemplified with optimized designs for minimum compliance, mechanism design and heat transfer.
Summary: This talk deals with stochastic shape optimization for elastic materials. Stochastic loading will be considered with different perceptions of risk aversion. Complicated geometries will be treated both by a full resolution of the geometry and by a two-scale approach. Furthermore, the paradigm of stochastic dominance will be transferred from finite-dimensional stochastic programming to shape optimization. This allows for flexible risk aversion via comparison with benchmark configurations.
Hybrid inverse problems
Summary: This is joint work with H. Ammari. The main focus of this talk is the reconstruction of the signals $f$ and $g_{i}$, $i=1,\dots,N$, from the knowledge of their sums $h_{i}=f+g_{i}$, under the assumption that $f$ and the $g_{i}$s can be sparsely represented with respect to two different dictionaries $A_{f}$ and $A_{g}$. This generalises the well-known ``morphological component analysis'' to a multi-measurement setting. The main result states that $f$ and the $g_{i}$s can be uniquely and stably reconstructed by finding sparse representations of $h_{i}$ for every $i$ with respect to the concatenated dictionary $[A_{f},A_{g}]$, provided that enough incoherent measurements $g_{i}$ are available. The incoherence is measured in terms of their mutual disjoint sparsity.
This method finds applications in the reconstruction procedures of several hybrid imaging inverse problems, where internal data are measured. These measurements usually consist of the main unknown multiplied by other unknown quantities, and so the disjoint sparsity approach can be directly applied. In this case, the feature that distinguishes the two parts is the different level of smoothness. As an example, I will show how to apply the method to the reconstruction in quantitative photoacoustic tomography, also in the case when the Grüneisen parameter, the optical absorption and the diffusion coefficient are all unknown.
Summary: We provide a mathematical analysis and a numerical framework for magnetoacoustic tomography with magnetic induction. The imaging problem is to reconstruct the conductivity distribution of biological tissue from measurements of the Lorentz force induced tissue vibration. We begin with reconstructing from the acoustic measurements the divergence of the Lorentz force, which is acting as the source term in the acoustic wave equation. Then we recover the electric current density from the divergence of the Lorentz force. To solve the nonlinear inverse conductivity problem, we introduce an optimal control method for reconstructing the conductivity from the electric current density. We prove its convergence and stability. We also present a fixed point approach and prove its convergence to the true solution. A new direct reconstruction scheme involving a partial differential equation is then proposed based on viscosity-type regularization of a transport equation satisfied by the electric current density field. We prove that solving such an equation yields the true conductivity distribution as the regularization parameter approaches zero. Finally, we test the three schemes numerically in the presence of measurement noise, quantify their stability and resolution, and compare their performance.
Summary: We present a novel strategy to perform estimation for evolution equations with uncertain initial conditions and parameters. We adopt an observer approach to construct a joint state-parameter estimator that uses available measurements by incorporating data-based correction terms in the dynamical system, in order to converge to the target solution. First, state estimation is performed using a Luenberger observer, which allows for effectiveness and robustness. For parameter estimation we then incorporate the parameters in an augmented dynamical system, and perform reduced-rank Kalman-based filtering in the parameter space. The convergence of the resulting estimators can be mathematically established, and their effectiveness demonstrated in various contexts. The applications considered here are motivated by cardiology, to provide patient-specific simulations of high predictive value.
Summary: In this talk, I will present an inverse problem in photoacoustic tomography. The aim is to recover and characterize the absorption coefficient of a soft body. The inverse problem is formulated as a problem of optimal control in which the control variable is the coefficient to retrieve. The existence of at least one optimal control was proved in "An optimal control problem in photoacoustic tomography" by M. Bergounioux et al. (2014). In this presentation, I will deal with the uniqueness of the optimal solution (the absorption coefficient), and I will also present a study of the sensitivity of this solution with respect to variations of the illumination source and of the observation.
Imaging and inverse modeling
Summary: We propose a Luenberger observer for reaction-diffusion models with propagating-front features, and for data associated with the location of the front over time. Such models are considered in various application fields, such as electrophysiology, wild-land fire propagation and tumor growth modeling. Drawing our inspiration from image processing methods, we start by proposing an observer for the eikonal-curvature equation that can be derived from the reaction-diffusion model by an asymptotic expansion. We then carry over this observer to the underlying reaction-diffusion equation by an "inverse asymptotic analysis", and we show that the associated correction in the dynamics has a stabilizing effect for the linearized estimation error. We also discuss the extension to joint state-parameter estimation by using the earlier-proposed ROUKF strategy. We then illustrate and assess our proposed observer method with test problems pertaining to electrophysiology modeling, including a realistic model of cardiac atria. Our numerical trials show that state estimation is directly very effective with the proposed Luenberger observer, while specific strategies are needed to accurately perform parameter estimation, as is usual with Kalman filtering in a nonlinear setting; we demonstrate two such successful strategies.
Summary: The purpose of this work is to study the influence of errors and uncertainties in the input data, like the conductivity, on the electrocardiographic imaging (ECGI) inverse solution. To do so, we propose a new stochastic optimal control formulation, which permits computing the distribution of the electric potential on the heart from the measurements on the body surface. The discretization uses a stochastic Galerkin method, allowing random and deterministic variables to be separated; the problem is then discretized using the finite element method in the spatial domain and a polynomial chaos expansion in the stochastic domain. The resulting problem is solved using a conjugate gradient method where the gradient of the cost function is computed with an adjoint technique. The efficiency of this approach for solving the inverse problem, and its ability to quantify the effect of conductivity uncertainties in the torso, are demonstrated through a number of numerical simulations on a 2D analytical geometry and on a 2D cross-section of a real torso.
Summary: This work addresses the inverse problem of electrocardiography from a new perspective, by combining electrical and mechanical measurements. Our strategy relies on the definition of a model of the electromechanical contraction which is registered on ECG data but also on measured mechanical displacements of the heart tissue, typically extracted from medical images. In this respect, we establish in this work the convergence of a sequential estimator which combines, for such coupled problems, various state-of-the-art sequential data assimilation methods in a unified, consistent and efficient framework. Indeed, we aggregate a Luenberger observer for the mechanical state, a Reduced-Order Unscented Kalman Filter applied to the parameters to be identified, and a POD projection of the electrical state. Then, using synthetic data, we show the benefits of our approach for the estimation of the electrical state of the ventricles along the heart beat, compared with more classical strategies which only consider an electrophysiological model with ECG measurements. Our numerical results show that the mechanical measurements improve the identifiability of the electrical problem, allowing us to reconstruct the electrical state of the coupled system more precisely. Therefore, this work is intended as a first proof of concept, with theoretical justifications and numerical investigations, of the advantage of using available multi-modal observations for the estimation and identification of an electromechanical model of the heart.
Optimal transport
Summary: This talk will present a notion of barycentric coordinates for histograms via optimal transport. This is formulated as the problem of finding the Wasserstein barycenter of several basis histograms that best fits an input histogram according to some arbitrary loss function. A Wasserstein barycenter is a histogram in-between the basis histograms according to the optimal transport metric; we use this geometry to project an input histogram onto the set of possible Wasserstein barycenters. We developed an efficient and robust algorithm from an automatic differentiation of the Sinkhorn iterative procedure.
We illustrate our algorithm with applications in computer graphics, such as color grading an input image using a database of photographs, or filling-in missing values in captured reflectance functions or geometries based on similar exemplars.
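The Sinkhorn iterations referred to above can be sketched as follows (a minimal illustration under my own notation, not the authors' code); each update is a smooth function of the input histograms, which is what makes automatic differentiation through the procedure possible:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """Entropy-regularized optimal transport between histograms a and b.

    C is the ground-cost matrix and eps the entropic regularization.
    Returns the transport plan diag(u) K diag(v).
    """
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)         # scale to match the column marginals
        u = a / (K @ v)           # scale to match the row marginals
    return u[:, None] * K * v[None, :]
```

At convergence the plan's row and column sums reproduce `a` and `b`, and its cost against `C` approximates the Wasserstein distance used in the barycenter fitting.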
Summary: Principal Component Analysis (PCA) in a linear space is certainly the most widely used approach in multivariate statistics to summarize efficiently the information in a data set. In this talk, we are concerned with the statistical analysis of data sets whose elements are histograms supported on the real line. For the purpose of dimension reduction and data visualization of variables in the space of histograms, it is of interest to compute their principal modes of variation around a mean element. However, since the number, size or locations of significant bins may vary from one histogram to another, using PCA in a Euclidean space is not an appropriate tool. In this work, a histogram is modeled as a probability density function (pdf) with support included in an interval of the real line, and the Wasserstein metric is used to measure the distance between two histograms. In this setting, the variability in a set of histograms can be analyzed via the notion of Geodesic PCA (GPCA) of probability measures in the Wasserstein space. However, the implementation of GPCA for data analysis remains a challenging task, even in the simplest case of pdfs supported on the real line. The main purpose of this talk is thus to present a fast algorithm which performs an exact GPCA of pdfs supported on the real line, and to show its usefulness for the statistical analysis of histograms of surnames over years in France.
Summary: Many problems in geometric optics or convex geometry can be recast as optimal transport problems: this includes the far-field reflector problem, Alexandrov’s curvature prescription problem, etc. A popular way to solve these problems numerically is to assume that the source probability measure is absolutely continuous while the target measure is finitely supported. We refer to this setting as semi-discrete optimal transport. Among the several algorithms proposed to solve semi-discrete optimal transport problems, one currently needs to choose between algorithms that are slow but come with a convergence speed analysis (and rely on coordinate-wise increments) or algorithms that are much faster in practice but which come with no convergence guarantees (Newton/quasi-Newton). In this talk we will present a simple damped Newton’s algorithm with global linear convergence and which is also very efficient in practice, when the cost function satisfies the so-called Ma-Trudinger-Wang regularity condition. Joint work with Jun Kitagawa and Boris Thibert.
Inverse problems in geophysics
C. Lauvernet (IRSTEA): Accounting for spatial structures in variational assimilation of remote sensing images in a canopy model.
Summary: Information contained in time series of image data should be explicitly exploited in data assimilation methods instead of operating over single pixels. This study proposes to adapt a variational data assimilation method for LAI (Leaf Area Index) images in a crop model. The method assumes that the parameters are governed spatially at several levels (cultivar, field, and pixel), while some of them are assumed to be stable temporally over the whole image. Such constraints help reduce the size of the inverse problem, transforming the usual assimilation scheme into simultaneous pixel patterns. Data assimilation with constraints is applied to the semi-mechanistic model BONSAÏ and evaluated by twin experiments, both on the quality of LAI prediction and on parameter estimates. Sensitivity to the observation frequency is also evaluated. The constraints improve the method's robustness and estimates when the number of available observations decreases, compared to the conventional method.
Summary: T.B.A.
Summary: A common problem in data assimilation comes from misplacement of structures, which is called position error. It stems for example from a bad estimation of the velocity field in the model causing structures to be misplaced. Under such position errors, distances used in classical data assimilation lead to a poor analyzed state.
Recently, optimal transport theory has become widely used in image processing, in applications ranging from image classification and segmentation to color transfer and movie reconstruction. This theory defines the Wasserstein distance, which looks for the optimal map transporting a density onto another one. One of its characteristics is to consider a density more as a positioning of different structures than as a real-valued function. In particular, the Wasserstein distance handles well data subject to position errors.
For this reason we investigate the use of such a distance in variational data assimilation. In the cost function, the difference between the observations and their model counterparts is computed using the Wasserstein distance. In this talk, we present such a cost function. In simple examples containing position errors, it shows promising results.
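Schematically, and with notation that is mine rather than the speakers', the proposal replaces the usual quadratic observation misfit of variational assimilation with a Wasserstein term in the cost function:

```latex
J(x_0) \;=\; \tfrac12\,\|x_0 - x_b\|_{B^{-1}}^2
\;+\; \sum_k W_2^2\bigl(\mathcal{H}_k(x_0),\, y_k\bigr),
```

where $x_b$ is the background state, $\mathcal{H}_k$ maps the state at observation time $t_k$ to observation space, and the fields are normalized so that $\mathcal{H}_k(x_0)$ and the observations $y_k$ can be compared as densities under the $W_2$ distance.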
Summary: T.B.A.
Design and applications of metamaterials
Summary: Negative index materials are artificial structures whose refractive index is negative over some frequency range. These materials were first investigated theoretically by Veselago in 1964, and their existence was confirmed experimentally by Shelby et al. in 2001. In this talk, I will discuss recent mathematical progress on negative index materials, in particular on cloaking and superlensing using complementary media, on cloaking a source and an object via anomalous localized resonance, and on their stability. There are two main difficulties in the study of negative index materials: 1) ellipticity and compactness are lost in general, due to the sign-changing coefficients of the modelling equations; 2) localized resonance, i.e., the fields blowing up in some regions while remaining bounded in others as the loss (the viscosity) goes to 0, might occur.
Summary: The interior transmission eigenvalue (ITE) problem, which arises in inverse scattering theory, is in general neither self-adjoint nor compact. In this talk, we will discuss various criteria for the discreteness of the spectrum of the ITE problem, obtained via a priori estimates and the spectral theory of compact operators. A variational approach and a Fourier-transform approach will be used to obtain the a priori estimates. This talk is based on joint work with Hoai-Minh Nguyen, EPFL, Switzerland.
Summary: Composite media made of fine mixtures of dielectric materials and metals present very peculiar properties that are quite interesting for applications. In particular, negative index materials have recently attracted a lot of attention in the mathematical community for their cloaking and superlensing properties. In this talk, we concentrate on hyperbolic metamaterials and present how superlensing can be achieved using such media, in the limiting case when losses become small. This is joint work with Hoai-Minh Nguyen.
Shape optimization: from theory to applications
Chairman: T.B.A.
Summary: Minimal partition problems consist of finding a domain partition that minimizes a given cost. In order to model physical phenomena or guarantee well-posedness and regularity properties, this cost often involves interface energy terms defined by integrals over the interfaces. In this talk I will focus on two-dimensional domains and energies defined by a weighted sum of the lengths of the interfaces. I will show how the resulting problem can be solved by an original approach based on partial differential equations, Gamma-convergence and duality techniques. I will present theoretical results, algorithms, and numerical examples including image inpainting and deblurring.
Summary: We are interested in the analysis of a well-known free boundary/shape optimization problem motivated by some issues arising in population dynamics. The question is to determine optimal spatial arrangements of favorable and unfavorable regions for a species to survive. The mathematical formulation of the model leads to an indefinite-weight linear eigenvalue problem in a fixed box $\Omega$, and we consider the general case of Robin boundary conditions on $\partial\Omega$. It is well known that it suffices to consider bang-bang weights taking two values of different signs, which can be parametrized by the characteristic function of the subset $E$ of $\Omega$ on which resources are located. Therefore, the optimal spatial arrangement is obtained by minimizing the positive principal eigenvalue with respect to $E$, under a volume constraint. By using symmetrization techniques, as well as necessary optimality conditions, we prove new qualitative results on the solutions. Namely, we completely solve the problem in dimension 1, and prove the counter-intuitive result that the ball is almost never a solution in dimension 2 or higher, despite what the numerical simulations suggest. This is joint work with Antoine Laurain, Grégoire Nadin and Yannick Privat.
Summary: We consider a shape minimization problem which involves the perimeter jointly with a repulsive capacitary term. This problem aims to model the equilibrium configurations of liquid droplets provided with an electric potential. We will discuss existence and regularity of minimisers in several situations. The talk is based on joint (and ongoing) works with M. Goldman, C. Muratov and M. Novaga.
Contributed talks: Session 1
Chairman: T.B.A.
Summary: In this talk we prove a null controllability result for the Vlasov-Navier-Stokes system in the two-dimensional torus with small data by means of an internal control. We show that one can modify the distribution of particles in the fluid, in large time, as well as the associated velocity field, from any initial and regular distribution to the zero steady state. In other words, we can modify the nonlinear dynamics of the system in order to absorb the particles and force the fluid to attain the equilibrium. The proof of the main result is achieved thanks to the return method and a Leray-Schauder fixed-point argument.
Contributed talks: Session 2
Chairman: T.B.A.
Summary: We are interested in a material time-evolution model that may exhibit several dissipation mechanisms (plasticity, fracture and viscous dissipation), from the point of view of mathematical analysis and numerical simulation. The fracture is modeled using the Ambrosio-Tortorelli functional, following the work of B. Bourdin. We approximate a continuous-time elasto-viscoplastic evolution via semi-discrete time evolutions obtained by solving incremental variational problems. We then prove an existence result for the continuous model when the time-discretization parameter converges to zero. We study this model numerically and show in particular that, for some mechanical parameters, different dissipation mechanisms can be expressed. We numerically reproduce the crack propagation during the initial phase of the plasticine experiment of G. Peltzer and P. Tapponnier, which models the action of the Indian plate on the Tibetan plateau and the resulting geological faults. (This talk is based on a joint work with E. Bonnetier and S. Labbé.)
A. Theljani (ENIT-LAMSIN): Effective image inpainting and restoration based on combined second- and fourth-order diffusion model
E. Jaïem (ENIT-LAMSIN): Cavities identification from partially overdetermined boundary data in linear elasticity.
Contributed talks: Session 3
Chairman: T.B.A.
R. Chamekh (ENIT-LAMSIN): A Nash-game approach to solve the coupled problem of conductivity identification and data completion
Poster session
J. Lassoued (Lamsin): Stability results for the parameter identification inverse problem in cardiac electrophysiology
M. Riahi (ENIT-Lamsin): Simultaneous estimation of two parameters in a porous medium
K. Alyani (ENIT-Lamsin): Factor Analysis Using the Bhattacharyya Discrepancy Function
K. Alyani (ENIT-Lamsin): Diagonality Measures of Hermitian Positive-Definite Matrices and Applications
R. Badreddine (ENIT-Lamsin): Image segmentation in presence of multiplicative noise
M. Kadri (ENIT-Lamsin): Extension of the Stecklov-Poincaré identification method to nonlinear behaviours
H. Houichet (ENIT-Lamsin): Topological gradient approach to speckle noise removal and edge detection in ultrasound images
H. Sakly (University of Gafsa): On the shape derivative of the volume integral operator in electromagnetic scattering by homogeneous bodies